Jan 20 07:00:48.575053 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 04:11:16 -00 2026 Jan 20 07:00:48.575103 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 07:00:48.575114 kernel: BIOS-provided physical RAM map: Jan 20 07:00:48.575121 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Jan 20 07:00:48.575128 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Jan 20 07:00:48.575135 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 20 07:00:48.575152 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Jan 20 07:00:48.575190 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Jan 20 07:00:48.575197 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 07:00:48.575204 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 20 07:00:48.575212 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 07:00:48.575219 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 20 07:00:48.575226 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 20 07:00:48.575233 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 20 07:00:48.575252 kernel: NX (Execute Disable) protection: active Jan 20 07:00:48.575260 kernel: APIC: Static calls initialized Jan 20 07:00:48.575287 kernel: SMBIOS 2.8 present. 
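For reference, the three "usable" ranges in the BIOS-e820 map above should sum to roughly the 4 GiB total the kernel reports later ("Memory: 3977432K/4193772K available"). A minimal sketch of that arithmetic, with the range boundaries hard-coded from the log lines above:

```python
# Sum the "usable" BIOS-e820 ranges printed above. The ranges are inclusive,
# so each spans (end - start + 1) bytes.
E820_USABLE = [
    (0x0000000000000000, 0x000000000009f7ff),
    (0x0000000000100000, 0x000000007ffdcfff),
    (0x0000000100000000, 0x000000017fffffff),
]

total_bytes = sum(end - start + 1 for start, end in E820_USABLE)
print(f"usable RAM: {total_bytes} bytes = {total_bytes // 1024} KiB "
      f"= {total_bytes / 2**30:.2f} GiB")
# Prints 4193778 KiB (~4.00 GiB), within a few KiB of the
# "Memory: .../4193772K available" line later in this log once the first
# page and other small reservations are carved out.
```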
Jan 20 07:00:48.575295 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Jan 20 07:00:48.575303 kernel: DMI: Memory slots populated: 1/1 Jan 20 07:00:48.575318 kernel: Hypervisor detected: KVM Jan 20 07:00:48.575325 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jan 20 07:00:48.575333 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 07:00:48.575340 kernel: kvm-clock: using sched offset of 9958065906 cycles Jan 20 07:00:48.575349 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 07:00:48.575357 kernel: tsc: Detected 1999.999 MHz processor Jan 20 07:00:48.575365 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 07:00:48.575373 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 07:00:48.575390 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Jan 20 07:00:48.575398 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 20 07:00:48.575406 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 07:00:48.575414 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jan 20 07:00:48.575422 kernel: Using GB pages for direct mapping Jan 20 07:00:48.575430 kernel: ACPI: Early table checksum verification disabled Jan 20 07:00:48.575438 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Jan 20 07:00:48.575446 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 07:00:48.575461 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 07:00:48.575469 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 07:00:48.575477 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 20 07:00:48.575485 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 07:00:48.575493 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 07:00:48.575510 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 07:00:48.575526 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 07:00:48.575535 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Jan 20 07:00:48.575543 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Jan 20 07:00:48.575577 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 20 07:00:48.575586 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Jan 20 07:00:48.575603 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Jan 20 07:00:48.575612 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Jan 20 07:00:48.575620 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Jan 20 07:00:48.575628 kernel: No NUMA configuration found Jan 20 07:00:48.575636 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jan 20 07:00:48.575645 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Jan 20 07:00:48.575653 kernel: Zone ranges: Jan 20 07:00:48.575673 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 07:00:48.575681 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 20 07:00:48.575689 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jan 20 07:00:48.575697 kernel: Device empty Jan 20 07:00:48.575706 kernel: Movable zone start for each node Jan 20 
07:00:48.575714 kernel: Early memory node ranges Jan 20 07:00:48.575722 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 20 07:00:48.575749 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Jan 20 07:00:48.575765 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 20 07:00:48.575790 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 20 07:00:48.575798 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 07:00:48.575806 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 20 07:00:48.575830 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 20 07:00:48.575839 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 07:00:48.575847 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 07:00:48.575864 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 07:00:48.575872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 07:00:48.575880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 07:00:48.575905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 07:00:48.575913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 07:00:48.575921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 07:00:48.575930 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 07:00:48.575946 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 07:00:48.575955 kernel: TSC deadline timer available Jan 20 07:00:48.575963 kernel: CPU topo: Max. logical packages: 1 Jan 20 07:00:48.575971 kernel: CPU topo: Max. logical dies: 1 Jan 20 07:00:48.575979 kernel: CPU topo: Max. dies per package: 1 Jan 20 07:00:48.575986 kernel: CPU topo: Max. threads per core: 1 Jan 20 07:00:48.575994 kernel: CPU topo: Num. cores per package: 2 Jan 20 07:00:48.576009 kernel: CPU topo: Num. 
threads per package: 2 Jan 20 07:00:48.576017 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 20 07:00:48.576025 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 07:00:48.576033 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 07:00:48.576041 kernel: kvm-guest: setup PV sched yield Jan 20 07:00:48.576049 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 20 07:00:48.576056 kernel: Booting paravirtualized kernel on KVM Jan 20 07:00:48.576064 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 07:00:48.576080 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 20 07:00:48.576088 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 20 07:00:48.576095 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 20 07:00:48.576103 kernel: pcpu-alloc: [0] 0 1 Jan 20 07:00:48.576111 kernel: kvm-guest: PV spinlocks enabled Jan 20 07:00:48.576119 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 07:00:48.576128 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 07:00:48.576143 kernel: random: crng init done Jan 20 07:00:48.576151 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 07:00:48.576159 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 07:00:48.576170 kernel: Fallback order for Node 0: 0 Jan 20 07:00:48.576183 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Jan 20 07:00:48.576194 kernel: Policy zone: Normal Jan 20 07:00:48.576217 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 07:00:48.576230 kernel: software IO TLB: area num 2. Jan 20 07:00:48.576238 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 20 07:00:48.576246 kernel: ftrace: allocating 40128 entries in 157 pages Jan 20 07:00:48.576254 kernel: ftrace: allocated 157 pages with 5 groups Jan 20 07:00:48.576262 kernel: Dynamic Preempt: voluntary Jan 20 07:00:48.576285 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 07:00:48.576304 kernel: rcu: RCU event tracing is enabled. Jan 20 07:00:48.576312 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 20 07:00:48.576320 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 07:00:48.576328 kernel: Rude variant of Tasks RCU enabled. Jan 20 07:00:48.576336 kernel: Tracing variant of Tasks RCU enabled. Jan 20 07:00:48.576344 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 07:00:48.576352 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 20 07:00:48.576360 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 07:00:48.576406 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 07:00:48.576415 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
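The "Kernel command line:" entry above repeats the bootloader line from the start of the log, with dracut prepending another copy of rootflags=rw and mount.usrflags=ro. A minimal sketch of how such a line can be split into key/value pairs (normally read from /proc/cmdline; a shortened copy of the logged line is hard-coded here for illustration):

```python
# Parse a kernel command line like the one logged above into key/value pairs.
# Bare flags (no '=') are kept with a value of None; repeated keys such as
# console= or rootflags= are collected into lists.
from collections import defaultdict

def parse_cmdline(cmdline: str) -> dict:
    params = defaultdict(list)
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key].append(value if sep else None)
    return dict(params)

example = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
           "flatcar.first_boot=detected flatcar.oem.id=akamai")
for key, values in parse_cmdline(example).items():
    print(key, values)
```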
Jan 20 07:00:48.576430 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 20 07:00:48.576438 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 07:00:48.576446 kernel: Console: colour VGA+ 80x25 Jan 20 07:00:48.576466 kernel: printk: legacy console [tty0] enabled Jan 20 07:00:48.576475 kernel: printk: legacy console [ttyS0] enabled Jan 20 07:00:48.576483 kernel: ACPI: Core revision 20240827 Jan 20 07:00:48.576499 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 07:00:48.576507 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 07:00:48.576515 kernel: x2apic enabled Jan 20 07:00:48.576523 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 07:00:48.576531 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 07:00:48.576608 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 07:00:48.576618 kernel: kvm-guest: setup PV IPIs Jan 20 07:00:48.576627 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 07:00:48.576635 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns Jan 20 07:00:48.576644 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999) Jan 20 07:00:48.576652 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 07:00:48.576660 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 07:00:48.576679 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 07:00:48.576687 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 07:00:48.576695 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 07:00:48.576703 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 07:00:48.576712 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 20 07:00:48.576720 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 20 07:00:48.576728 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 20 07:00:48.576744 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 07:00:48.576753 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 20 07:00:48.576761 kernel: active return thunk: srso_alias_return_thunk Jan 20 07:00:48.576769 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 07:00:48.576778 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 07:00:48.576786 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 07:00:48.576801 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 07:00:48.576810 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 07:00:48.576818 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 07:00:48.576826 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 20 07:00:48.576834 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 07:00:48.576862 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Jan 20 07:00:48.576870 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. 
Jan 20 07:00:48.576887 kernel: Freeing SMP alternatives memory: 32K Jan 20 07:00:48.576895 kernel: pid_max: default: 32768 minimum: 301 Jan 20 07:00:48.576903 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 07:00:48.576911 kernel: landlock: Up and running. Jan 20 07:00:48.576920 kernel: SELinux: Initializing. Jan 20 07:00:48.576928 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 07:00:48.576936 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 07:00:48.576951 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 07:00:48.576960 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 20 07:00:48.576968 kernel: ... version: 0 Jan 20 07:00:48.576976 kernel: ... bit width: 48 Jan 20 07:00:48.576984 kernel: ... generic registers: 6 Jan 20 07:00:48.576992 kernel: ... value mask: 0000ffffffffffff Jan 20 07:00:48.576999 kernel: ... max period: 00007fffffffffff Jan 20 07:00:48.577007 kernel: ... fixed-purpose events: 0 Jan 20 07:00:48.577022 kernel: ... event mask: 000000000000003f Jan 20 07:00:48.577030 kernel: signal: max sigframe size: 3376 Jan 20 07:00:48.577038 kernel: rcu: Hierarchical SRCU implementation. Jan 20 07:00:48.577046 kernel: rcu: Max phase no-delay instances is 400. Jan 20 07:00:48.577054 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 07:00:48.577062 kernel: smp: Bringing up secondary CPUs ... Jan 20 07:00:48.577070 kernel: smpboot: x86: Booting SMP configuration: Jan 20 07:00:48.577085 kernel: .... node #0, CPUs: #1 Jan 20 07:00:48.577109 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 07:00:48.577117 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) Jan 20 07:00:48.577125 kernel: Memory: 3977432K/4193772K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 210912K reserved, 0K cma-reserved) Jan 20 07:00:48.577133 kernel: devtmpfs: initialized Jan 20 07:00:48.577141 kernel: x86/mm: Memory block size: 128MB Jan 20 07:00:48.577150 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 07:00:48.577167 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 07:00:48.577175 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 07:00:48.577183 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 07:00:48.577191 kernel: audit: initializing netlink subsys (disabled) Jan 20 07:00:48.577199 kernel: audit: type=2000 audit(1768892443.197:1): state=initialized audit_enabled=0 res=1 Jan 20 07:00:48.577207 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 07:00:48.577215 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 07:00:48.577230 kernel: cpuidle: using governor menu Jan 20 07:00:48.577238 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 07:00:48.577246 kernel: dca service started, version 1.12.1 Jan 20 07:00:48.577254 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 20 07:00:48.577262 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 07:00:48.577270 kernel: PCI: Using configuration type 1 for base access Jan 20 07:00:48.577278 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 20 07:00:48.577294 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 07:00:48.577318 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 07:00:48.577326 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 07:00:48.577334 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 07:00:48.577342 kernel: ACPI: Added _OSI(Module Device) Jan 20 07:00:48.577350 kernel: ACPI: Added _OSI(Processor Device) Jan 20 07:00:48.577358 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 07:00:48.577374 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 07:00:48.577382 kernel: ACPI: Interpreter enabled Jan 20 07:00:48.577390 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 07:00:48.577398 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 07:00:48.577406 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 07:00:48.577414 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 07:00:48.577422 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 07:00:48.577437 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 07:00:48.577820 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 07:00:48.578057 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 07:00:48.578283 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 07:00:48.578294 kernel: PCI host bridge to bus 0000:00 Jan 20 07:00:48.578524 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 07:00:48.578829 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 07:00:48.579034 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 07:00:48.579232 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 20 07:00:48.579430 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 07:00:48.579681 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Jan 20 07:00:48.579906 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 07:00:48.580353 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 20 07:00:48.580610 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 20 07:00:48.580832 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 20 07:00:48.581045 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 20 07:00:48.581385 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 20 07:00:48.581635 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 07:00:48.581865 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jan 20 07:00:48.582081 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Jan 20 07:00:48.582483 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 20 07:00:48.582717 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 20 07:00:48.582944 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 20 07:00:48.583276 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jan 20 07:00:48.583495 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 20 07:00:48.583730 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 20 07:00:48.583946 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 20 07:00:48.584360 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 20 07:00:48.584613 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 07:00:48.584845 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 20 07:00:48.585058 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Jan 20 07:00:48.585380 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Jan 20 07:00:48.585651 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 20 07:00:48.585899 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 20 07:00:48.585950 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 07:00:48.585959 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 07:00:48.585968 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 07:00:48.585976 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 07:00:48.585984 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 07:00:48.585992 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 07:00:48.586009 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 07:00:48.586017 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 07:00:48.586026 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 07:00:48.586034 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 07:00:48.586042 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 07:00:48.586051 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 07:00:48.586059 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 07:00:48.586077 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 07:00:48.586086 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 07:00:48.586094 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 07:00:48.586102 kernel: iommu: Default domain type: Translated Jan 20 07:00:48.586111 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 07:00:48.586119 kernel: PCI: Using ACPI for IRQ routing Jan 20 07:00:48.586127 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 07:00:48.586136 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Jan 20 07:00:48.586151 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Jan 20 07:00:48.586373 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 07:00:48.586609 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 07:00:48.586826 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 07:00:48.586837 kernel: vgaarb: loaded Jan 20 07:00:48.586846 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 20 07:00:48.586867 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 07:00:48.586875 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 07:00:48.586883 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 07:00:48.586892 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 07:00:48.586900 kernel: pnp: PnP ACPI init Jan 20 07:00:48.587137 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 07:00:48.587150 kernel: pnp: PnP ACPI: found 5 devices Jan 20 07:00:48.587170 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 07:00:48.587178 kernel: NET: Registered PF_INET protocol family Jan 20 07:00:48.587186 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 07:00:48.587194 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 07:00:48.587202 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 07:00:48.587210 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 07:00:48.587218 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 07:00:48.587235 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 07:00:48.587243 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 07:00:48.587251 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 07:00:48.587259 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 07:00:48.587267 kernel: NET: Registered PF_XDP protocol family Jan 20 07:00:48.587473 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 07:00:48.587716 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 07:00:48.587938 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 07:00:48.588140 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 20 07:00:48.588340 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 20 07:00:48.588541 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Jan 20 07:00:48.588569 kernel: PCI: CLS 0 bytes, default 64 Jan 20 07:00:48.588578 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 20 07:00:48.588598 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Jan 20 07:00:48.588606 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns Jan 20 07:00:48.588614 kernel: Initialise system trusted keyrings Jan 20 07:00:48.588622 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 07:00:48.588630 kernel: Key type asymmetric registered Jan 20 07:00:48.588638 kernel: Asymmetric key parser 'x509' registered Jan 20 07:00:48.588646 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 07:00:48.588663 kernel: io scheduler mq-deadline registered Jan 20 07:00:48.588671 kernel: io scheduler kyber registered Jan 20 07:00:48.588678 kernel: io scheduler bfq registered Jan 20 07:00:48.588687 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 07:00:48.588695 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 07:00:48.588704 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 07:00:48.588712 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 07:00:48.588720 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 07:00:48.588737 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 07:00:48.588745 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 07:00:48.588753 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 07:00:48.588991 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 20 07:00:48.589004 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 20 07:00:48.589384 kernel: rtc_cmos 00:03: registered as rtc0 Jan 20 07:00:48.589646 kernel: rtc_cmos 00:03: setting system clock to 
2026-01-20T07:00:45 UTC (1768892445) Jan 20 07:00:48.589858 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 07:00:48.589869 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 07:00:48.589878 kernel: NET: Registered PF_INET6 protocol family Jan 20 07:00:48.589886 kernel: Segment Routing with IPv6 Jan 20 07:00:48.589894 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 07:00:48.589903 kernel: NET: Registered PF_PACKET protocol family Jan 20 07:00:48.589923 kernel: Key type dns_resolver registered Jan 20 07:00:48.589931 kernel: IPI shorthand broadcast: enabled Jan 20 07:00:48.589940 kernel: sched_clock: Marking stable (4158004028, 381750056)->(4725291841, -185537757) Jan 20 07:00:48.589948 kernel: registered taskstats version 1 Jan 20 07:00:48.589956 kernel: Loading compiled-in X.509 certificates Jan 20 07:00:48.589965 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3e9049adf8f1d71dd06c731465288f6e1d353052' Jan 20 07:00:48.589973 kernel: Demotion targets for Node 0: null Jan 20 07:00:48.589988 kernel: Key type .fscrypt registered Jan 20 07:00:48.589996 kernel: Key type fscrypt-provisioning registered Jan 20 07:00:48.590005 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 07:00:48.590013 kernel: ima: Allocated hash algorithm: sha1 Jan 20 07:00:48.590021 kernel: ima: No architecture policies found Jan 20 07:00:48.590029 kernel: clk: Disabling unused clocks Jan 20 07:00:48.590037 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 20 07:00:48.590053 kernel: Write protecting the kernel read-only data: 47104k Jan 20 07:00:48.590061 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 20 07:00:48.590069 kernel: Run /init as init process Jan 20 07:00:48.590077 kernel: with arguments: Jan 20 07:00:48.590085 kernel: /init Jan 20 07:00:48.590094 kernel: with environment: Jan 20 07:00:48.590102 kernel: HOME=/ Jan 20 07:00:48.590165 kernel: TERM=linux Jan 20 07:00:48.590181 kernel: SCSI subsystem initialized Jan 20 07:00:48.590190 kernel: libata version 3.00 loaded. 
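The rtc_cmos entry above pairs the wall-clock time with its Unix epoch value (1768892445), and the audit records later in the log use the same epoch convention. A tiny check of that conversion:

```python
# Confirm the epoch value in the rtc_cmos line above matches the UTC time
# printed next to it, and decode an audit timestamp the same way.
from datetime import datetime, timezone

print(datetime.fromtimestamp(1768892445, tz=timezone.utc).isoformat())
# -> 2026-01-20T07:00:45+00:00, matching "setting system clock to 2026-01-20T07:00:45 UTC"

print(datetime.fromtimestamp(1768892448.602, tz=timezone.utc).isoformat())
# -> 2026-01-20T07:00:48.602000+00:00, matching audit(1768892448.602:2)
```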
Jan 20 07:00:48.590627 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 07:00:48.590641 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 07:00:48.590856 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 07:00:48.591240 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 07:00:48.591474 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 07:00:48.591770 kernel: scsi host0: ahci Jan 20 07:00:48.592010 kernel: scsi host1: ahci Jan 20 07:00:48.592455 kernel: scsi host2: ahci Jan 20 07:00:48.592738 kernel: scsi host3: ahci Jan 20 07:00:48.592996 kernel: scsi host4: ahci Jan 20 07:00:48.593384 kernel: scsi host5: ahci Jan 20 07:00:48.593396 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1 Jan 20 07:00:48.593406 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1 Jan 20 07:00:48.593415 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1 Jan 20 07:00:48.593424 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1 Jan 20 07:00:48.593445 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1 Jan 20 07:00:48.593455 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1 Jan 20 07:00:48.593464 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 07:00:48.593473 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 07:00:48.593482 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 07:00:48.593491 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 07:00:48.593501 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 20 07:00:48.593518 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 07:00:48.597834 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Jan 20 07:00:48.598087 kernel: scsi host6: Virtio SCSI HBA Jan 20 07:00:48.598348 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 20 07:00:48.598652 kernel: sd 6:0:0:0: Power-on or device reset occurred Jan 20 07:00:48.598901 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Jan 20 07:00:48.599165 kernel: sd 6:0:0:0: [sda] Write Protect is off Jan 20 07:00:48.599401 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 20 07:00:48.599671 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 20 07:00:48.599685 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 07:00:48.599694 kernel: GPT:25804799 != 167739391 Jan 20 07:00:48.599702 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 07:00:48.599724 kernel: GPT:25804799 != 167739391 Jan 20 07:00:48.599740 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 07:00:48.599749 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 07:00:48.599996 kernel: sd 6:0:0:0: [sda] Attached SCSI disk Jan 20 07:00:48.600009 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
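The GPT warnings above ("GPT:25804799 != 167739391") report the alternate header at an LBA far short of the disk's last sector, which is what you see when a partition table written for a smaller image lands on a larger provisioned volume. Converting both figures from the log into sizes makes the mismatch concrete:

```python
# Compare the disk size against the size implied by the backup GPT header
# location, both taken from the log lines above (512-byte sectors).
SECTOR = 512
disk_sectors = 167739392      # from "[sda] 167739392 512-byte logical blocks"
alt_header_lba = 25804799     # from "GPT:25804799 != 167739391"

print(f"disk size:          {disk_sectors * SECTOR / 2**30:.2f} GiB")        # ~79.98 GiB
print(f"table written for:  {(alt_header_lba + 1) * SECTOR / 2**30:.2f} GiB")  # ~12.31 GiB
```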
Jan 20 07:00:48.600018 kernel: device-mapper: uevent: version 1.0.3 Jan 20 07:00:48.600027 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 07:00:48.600047 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 07:00:48.600067 kernel: raid6: avx2x4 gen() 35491 MB/s Jan 20 07:00:48.600082 kernel: raid6: avx2x2 gen() 33464 MB/s Jan 20 07:00:48.600091 kernel: raid6: avx2x1 gen() 27104 MB/s Jan 20 07:00:48.600114 kernel: raid6: using algorithm avx2x4 gen() 35491 MB/s Jan 20 07:00:48.600123 kernel: raid6: .... xor() 4953 MB/s, rmw enabled Jan 20 07:00:48.600131 kernel: raid6: using avx2x2 recovery algorithm Jan 20 07:00:48.600140 kernel: xor: automatically using best checksumming function avx Jan 20 07:00:48.600149 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 07:00:48.600157 kernel: BTRFS: device fsid 98f50efd-4872-4dd8-af35-5e494490b9aa devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (167) Jan 20 07:00:48.600166 kernel: BTRFS info (device dm-0): first mount of filesystem 98f50efd-4872-4dd8-af35-5e494490b9aa Jan 20 07:00:48.600181 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 07:00:48.600190 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 20 07:00:48.600199 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 07:00:48.600207 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 07:00:48.600216 kernel: loop: module loaded Jan 20 07:00:48.600224 kernel: loop0: detected capacity change from 0 to 100552 Jan 20 07:00:48.600233 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 07:00:48.600249 systemd[1]: Successfully made /usr/ read-only. Jan 20 07:00:48.600261 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 07:00:48.600271 systemd[1]: Detected virtualization kvm. Jan 20 07:00:48.600280 systemd[1]: Detected architecture x86-64. Jan 20 07:00:48.600288 systemd[1]: Running in initrd. Jan 20 07:00:48.600298 systemd[1]: No hostname configured, using default hostname. Jan 20 07:00:48.600314 systemd[1]: Hostname set to . Jan 20 07:00:48.600323 systemd[1]: Initializing machine ID from random generator. Jan 20 07:00:48.600332 systemd[1]: Queued start job for default target initrd.target. Jan 20 07:00:48.600341 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 07:00:48.600350 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 07:00:48.600360 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 07:00:48.600376 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 07:00:48.600386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 07:00:48.600396 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 07:00:48.600405 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 20 07:00:48.600414 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 07:00:48.600423 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 07:00:48.600439 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 07:00:48.600448 systemd[1]: Reached target paths.target - Path Units. Jan 20 07:00:48.600457 systemd[1]: Reached target slices.target - Slice Units. Jan 20 07:00:48.600466 systemd[1]: Reached target swap.target - Swaps. Jan 20 07:00:48.600475 systemd[1]: Reached target timers.target - Timer Units. Jan 20 07:00:48.600484 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 07:00:48.600493 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 07:00:48.600509 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 07:00:48.600518 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 07:00:48.600527 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 07:00:48.600536 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 07:00:48.600545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 07:00:48.600611 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 07:00:48.600620 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 07:00:48.600640 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 07:00:48.600649 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 07:00:48.600659 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 07:00:48.600668 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 07:00:48.600677 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 07:00:48.600686 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 07:00:48.600703 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 07:00:48.600712 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 07:00:48.600722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 07:00:48.600731 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 07:00:48.600747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 07:00:48.600756 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 07:00:48.600766 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 07:00:48.600810 systemd-journald[303]: Collecting audit messages is enabled. Jan 20 07:00:48.600841 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 07:00:48.600850 kernel: Bridge firewalling registered Jan 20 07:00:48.600859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 07:00:48.600869 systemd-journald[303]: Journal started Jan 20 07:00:48.600896 systemd-journald[303]: Runtime Journal (/run/log/journal/e07787bdda8c479292d3f5daa1f7a0c8) is 8M, max 78.1M, 70.1M free. 
Jan 20 07:00:48.592675 systemd-modules-load[305]: Inserted module 'br_netfilter' Jan 20 07:00:48.605603 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 07:00:48.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.614620 kernel: audit: type=1130 audit(1768892448.602:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.727591 kernel: audit: type=1130 audit(1768892448.716:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.727827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 07:00:48.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.739125 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 07:00:48.750954 kernel: audit: type=1130 audit(1768892448.728:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.750985 kernel: audit: type=1130 audit(1768892448.739:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.749787 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 07:00:48.755868 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 07:00:48.767742 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 07:00:48.773679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 07:00:48.782969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 07:00:48.802779 kernel: audit: type=1130 audit(1768892448.784:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.802823 kernel: audit: type=1334 audit(1768892448.785:7): prog-id=6 op=LOAD Jan 20 07:00:48.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.785000 audit: BPF prog-id=6 op=LOAD Jan 20 07:00:48.808396 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 20 07:00:48.816640 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 07:00:48.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.831898 kernel: audit: type=1130 audit(1768892448.816:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.841965 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 07:00:48.846704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 07:00:48.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.858650 systemd-tmpfiles[324]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 07:00:48.863760 kernel: audit: type=1130 audit(1768892448.848:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.875760 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 07:00:48.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.887612 kernel: audit: type=1130 audit(1768892448.878:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:48.888579 dracut-cmdline[339]: dracut-109 Jan 20 07:00:48.895748 dracut-cmdline[339]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 07:00:48.942334 systemd-resolved[335]: Positive Trust Anchors: Jan 20 07:00:48.943613 systemd-resolved[335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 07:00:48.944271 systemd-resolved[335]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 07:00:48.944301 systemd-resolved[335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 07:00:49.021353 systemd-resolved[335]: Defaulting to hostname 'linux'. 
Jan 20 07:00:49.024540 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 07:00:49.026872 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 07:00:49.037607 kernel: audit: type=1130 audit(1768892449.026:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:49.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:49.113607 kernel: Loading iSCSI transport class v2.0-870. Jan 20 07:00:49.138585 kernel: iscsi: registered transport (tcp) Jan 20 07:00:49.173682 kernel: iscsi: registered transport (qla4xxx) Jan 20 07:00:49.173766 kernel: QLogic iSCSI HBA Driver Jan 20 07:00:49.211496 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 07:00:49.271589 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 07:00:49.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:49.274526 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 07:00:49.343710 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 07:00:49.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:49.347351 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 07:00:49.349108 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 07:00:49.422232 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 07:00:49.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:49.428000 audit: BPF prog-id=7 op=LOAD Jan 20 07:00:49.428000 audit: BPF prog-id=8 op=LOAD Jan 20 07:00:49.430698 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 07:00:49.473511 systemd-udevd[583]: Using default interface naming scheme 'v257'. Jan 20 07:00:49.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:49.512705 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 07:00:49.516781 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 07:00:49.522253 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 07:00:49.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:49.530706 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 20 07:00:49.528000 audit: BPF prog-id=9 op=LOAD Jan 20 07:00:49.580009 dracut-pre-trigger[668]: rd.md=0: removing MD RAID activation Jan 20 07:00:49.614918 systemd-networkd[675]: lo: Link UP Jan 20 07:00:49.615957 systemd-networkd[675]: lo: Gained carrier Jan 20 07:00:49.618398 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 07:00:49.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:49.620492 systemd[1]: Reached target network.target - Network. Jan 20 07:00:49.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:49.621439 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 07:00:49.625225 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 07:00:49.781954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 07:00:49.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:49.788754 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 07:00:49.966190 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 20 07:00:49.998695 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 20 07:00:50.051623 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 07:00:50.065593 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 07:00:50.073464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 20 07:00:50.096192 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 20 07:00:50.276740 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 07:00:50.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:50.279635 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 07:00:50.279709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 07:00:50.280582 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 07:00:50.356906 kernel: AES CTR mode by8 optimization enabled Jan 20 07:00:50.284355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 07:00:50.381961 disk-uuid[778]: Primary Header is updated. Jan 20 07:00:50.381961 disk-uuid[778]: Secondary Entries is updated. Jan 20 07:00:50.381961 disk-uuid[778]: Secondary Header is updated. Jan 20 07:00:50.403497 systemd-networkd[675]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 07:00:50.403519 systemd-networkd[675]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 20 07:00:50.407978 systemd-networkd[675]: eth0: Link UP Jan 20 07:00:50.408893 systemd-networkd[675]: eth0: Gained carrier Jan 20 07:00:50.408907 systemd-networkd[675]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 07:00:50.653882 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 07:00:50.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:50.717169 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 07:00:50.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:50.719463 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 07:00:50.720829 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 07:00:50.723074 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 07:00:50.727771 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 07:00:50.760249 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 07:00:50.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:51.433726 systemd-networkd[675]: eth0: DHCPv4 address 172.232.7.121/24, gateway 172.232.7.1 acquired from 23.192.120.231 Jan 20 07:00:51.570913 disk-uuid[801]: Warning: The kernel is still using the old partition table. Jan 20 07:00:51.570913 disk-uuid[801]: The new table will be used at the next reboot or after you Jan 20 07:00:51.570913 disk-uuid[801]: run partprobe(8) or kpartx(8) Jan 20 07:00:51.570913 disk-uuid[801]: The operation has completed successfully. Jan 20 07:00:51.579230 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 07:00:51.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:51.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:51.579395 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 07:00:51.582702 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 20 07:00:51.642612 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (844) Jan 20 07:00:51.646598 kernel: BTRFS info (device sda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 07:00:51.646638 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 07:00:51.655670 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 20 07:00:51.655699 kernel: BTRFS info (device sda6): turning on async discard Jan 20 07:00:51.657939 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 07:00:51.669604 kernel: BTRFS info (device sda6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 07:00:51.670352 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 07:00:51.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:51.673505 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 07:00:51.675831 systemd-networkd[675]: eth0: Gained IPv6LL Jan 20 07:00:52.377581 ignition[863]: Ignition 2.24.0 Jan 20 07:00:52.377596 ignition[863]: Stage: fetch-offline Jan 20 07:00:52.380899 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 07:00:52.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:52.377677 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jan 20 07:00:52.377694 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 20 07:00:52.383798 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 20 07:00:52.377827 ignition[863]: parsed url from cmdline: "" Jan 20 07:00:52.377832 ignition[863]: no config URL provided Jan 20 07:00:52.377839 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 07:00:52.377852 ignition[863]: no config at "/usr/lib/ignition/user.ign" Jan 20 07:00:52.377858 ignition[863]: failed to fetch config: resource requires networking Jan 20 07:00:52.378138 ignition[863]: Ignition finished successfully Jan 20 07:00:52.565576 ignition[873]: Ignition 2.24.0 Jan 20 07:00:52.565617 ignition[873]: Stage: fetch Jan 20 07:00:52.565924 ignition[873]: no configs at "/usr/lib/ignition/base.d" Jan 20 07:00:52.565950 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 20 07:00:52.566151 ignition[873]: parsed url from cmdline: "" Jan 20 07:00:52.566159 ignition[873]: no config URL provided Jan 20 07:00:52.566170 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 07:00:52.566194 ignition[873]: no config at "/usr/lib/ignition/user.ign" Jan 20 07:00:52.566257 ignition[873]: PUT http://169.254.169.254/v1/token: attempt #1 Jan 20 07:00:52.669263 ignition[873]: PUT result: OK Jan 20 07:00:52.669327 ignition[873]: GET http://169.254.169.254/v1/user-data: attempt #1 Jan 20 07:00:52.786692 ignition[873]: GET result: OK Jan 20 07:00:52.788526 ignition[873]: parsing config with SHA512: f9d68ea7b4715cea444e1b03d4f5c4e91c453b8d419090a39797bcce62ce5c0fbc42f61aadc816c5950fd4a807ed5622cbf129fa99e0a4f9a9e6a24f5bcef103 Jan 20 07:00:52.799979 unknown[873]: fetched base config from "system" Jan 20 07:00:52.800524 ignition[873]: fetch: fetch complete Jan 20 07:00:52.799999 unknown[873]: fetched base config from "system" Jan 20 07:00:52.800534 ignition[873]: fetch: fetch passed Jan 20 07:00:52.800011 unknown[873]: fetched user config from "akamai" Jan 20 07:00:52.801954 ignition[873]: Ignition finished successfully Jan 20 07:00:52.807500 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 07:00:52.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:52.811815 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 07:00:52.900280 ignition[880]: Ignition 2.24.0 Jan 20 07:00:52.900308 ignition[880]: Stage: kargs Jan 20 07:00:52.902680 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jan 20 07:00:52.902709 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 20 07:00:52.903951 ignition[880]: kargs: kargs passed Jan 20 07:00:52.904022 ignition[880]: Ignition finished successfully Jan 20 07:00:52.914361 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 07:00:52.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:52.919273 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
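
The fetch stage above records the two-step exchange with the Akamai metadata service: a PUT to http://169.254.169.254/v1/token for a short-lived token, then a GET of /v1/user-data, whose body Ignition hashes (the SHA512 in the log) and parses. A rough sketch of the same flow; the endpoints come from the log, while the Metadata-Token-Expiry-Seconds and Metadata-Token header names are assumptions about the service, not something the log shows:

```python
# Sketch of the token + user-data fetch that the Ignition "fetch" stage logs.
# Endpoints are taken from the log; the header names are assumptions.
import urllib.request

BASE = "http://169.254.169.254/v1"

def fetch_user_data() -> bytes:
    token_req = urllib.request.Request(
        f"{BASE}/token",
        method="PUT",
        headers={"Metadata-Token-Expiry-Seconds": "300"},  # assumed header name
    )
    with urllib.request.urlopen(token_req, timeout=10) as resp:
        token = resp.read().decode().strip()

    data_req = urllib.request.Request(
        f"{BASE}/user-data",
        headers={"Metadata-Token": token},  # assumed header name
    )
    with urllib.request.urlopen(data_req, timeout=10) as resp:
        return resp.read()  # Ignition hashes and parses this payload
```
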
Jan 20 07:00:52.969312 ignition[887]: Ignition 2.24.0 Jan 20 07:00:52.969341 ignition[887]: Stage: disks Jan 20 07:00:52.969649 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 20 07:00:52.969677 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 20 07:00:52.971131 ignition[887]: disks: disks passed Jan 20 07:00:52.971216 ignition[887]: Ignition finished successfully Jan 20 07:00:52.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:52.974167 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 07:00:52.976108 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 07:00:52.978089 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 07:00:52.980484 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 07:00:52.982504 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 07:00:52.984168 systemd[1]: Reached target basic.target - Basic System. Jan 20 07:00:52.988031 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 07:00:53.043036 systemd-fsck[895]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 20 07:00:53.047382 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 07:00:53.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:53.052852 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 07:00:53.247586 kernel: EXT4-fs (sda9): mounted filesystem cccfbfd8-bb77-4a2f-9af9-c87f4957b904 r/w with ordered data mode. Quota mode: none. Jan 20 07:00:53.248591 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 07:00:53.250069 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 07:00:53.263078 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 07:00:53.280379 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 07:00:53.285012 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 07:00:53.285086 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 07:00:53.285135 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 07:00:53.300640 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (903) Jan 20 07:00:53.305076 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 07:00:53.314204 kernel: BTRFS info (device sda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 07:00:53.314282 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 07:00:53.318537 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 07:00:53.333361 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 20 07:00:53.333452 kernel: BTRFS info (device sda6): turning on async discard Jan 20 07:00:53.333474 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 07:00:53.343817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 07:00:53.835317 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 07:00:53.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:53.851477 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 07:00:53.851597 kernel: audit: type=1130 audit(1768892453.837:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:53.843790 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 07:00:53.864905 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 07:00:53.880825 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 07:00:53.886987 kernel: BTRFS info (device sda6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 07:00:53.980472 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 07:00:53.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:53.992653 kernel: audit: type=1130 audit(1768892453.982:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:54.011503 ignition[1002]: INFO : Ignition 2.24.0 Jan 20 07:00:54.011503 ignition[1002]: INFO : Stage: mount Jan 20 07:00:54.014420 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 07:00:54.014420 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 20 07:00:54.014420 ignition[1002]: INFO : mount: mount passed Jan 20 07:00:54.014420 ignition[1002]: INFO : Ignition finished successfully Jan 20 07:00:54.032488 kernel: audit: type=1130 audit(1768892454.017:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:54.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:54.017379 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 07:00:54.021038 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 07:00:54.252531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 20 07:00:54.327607 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1014) Jan 20 07:00:54.349061 kernel: BTRFS info (device sda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 07:00:54.349158 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 07:00:54.372918 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 20 07:00:54.373010 kernel: BTRFS info (device sda6): turning on async discard Jan 20 07:00:54.373070 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 07:00:54.380082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 07:00:54.422886 ignition[1031]: INFO : Ignition 2.24.0 Jan 20 07:00:54.422886 ignition[1031]: INFO : Stage: files Jan 20 07:00:54.426592 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 07:00:54.426592 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 20 07:00:54.426592 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Jan 20 07:00:54.431498 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 07:00:54.431498 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 07:00:54.452971 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 07:00:54.454547 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 07:00:54.456284 unknown[1031]: wrote ssh authorized keys file for user: core Jan 20 07:00:54.458820 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 07:00:54.462256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 07:00:54.464133 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 07:00:54.864607 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 07:00:55.183902 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 07:00:55.186954 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 07:00:55.186954 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 07:00:55.186954 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 07:00:55.214699 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 07:00:55.214699 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 07:00:55.214699 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 07:00:55.214699 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 07:00:55.214699 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 20 07:00:55.242687 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 07:00:55.242687 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 07:00:55.242687 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 07:00:55.255066 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 07:00:55.255066 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 07:00:55.255066 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 20 07:00:55.795940 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 07:00:58.618655 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 07:00:58.618655 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 07:00:58.624866 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 07:00:58.629071 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 07:00:58.629071 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 07:00:58.629071 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 07:00:58.634061 ignition[1031]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 20 07:00:58.634061 ignition[1031]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 20 07:00:58.634061 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 07:00:58.634061 ignition[1031]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 20 07:00:58.634061 ignition[1031]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 07:00:58.634061 ignition[1031]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 07:00:58.634061 ignition[1031]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 07:00:58.634061 ignition[1031]: INFO : files: files passed Jan 20 07:00:58.634061 ignition[1031]: INFO : Ignition finished successfully Jan 20 07:00:58.660882 kernel: audit: type=1130 audit(1768892458.640:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.636111 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 07:00:58.646795 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 07:00:58.654035 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 07:00:58.675634 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 07:00:58.675822 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 07:00:58.695847 kernel: audit: type=1130 audit(1768892458.678:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.695883 kernel: audit: type=1131 audit(1768892458.678:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.703432 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 07:00:58.705017 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 07:00:58.706817 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 07:00:58.707911 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 07:00:58.718529 kernel: audit: type=1130 audit(1768892458.708:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.710028 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 07:00:58.720815 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 07:00:58.781759 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 07:00:58.781969 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 07:00:58.801038 kernel: audit: type=1130 audit(1768892458.783:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:00:58.801094 kernel: audit: type=1131 audit(1768892458.783:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.784101 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 07:00:58.801762 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 07:00:58.804847 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 07:00:58.806746 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 07:00:58.847524 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 07:00:58.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.856582 kernel: audit: type=1130 audit(1768892458.848:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.858098 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 07:00:58.883623 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 07:00:58.883969 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 07:00:58.885056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 07:00:58.886948 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 07:00:58.898209 kernel: audit: type=1131 audit(1768892458.890:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.888735 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 07:00:58.888993 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 07:00:58.898002 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 07:00:58.899184 systemd[1]: Stopped target basic.target - Basic System. Jan 20 07:00:58.900772 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 07:00:58.902607 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 07:00:58.904218 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 07:00:58.905819 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 07:00:58.907520 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 20 07:00:58.909251 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 07:00:58.911046 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 07:00:58.912817 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 07:00:58.925671 kernel: audit: type=1131 audit(1768892458.917:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.914522 systemd[1]: Stopped target swap.target - Swaps. Jan 20 07:00:58.916176 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 07:00:58.916431 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 07:00:58.925361 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 07:00:58.946177 kernel: audit: type=1131 audit(1768892458.938:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.926503 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 07:00:58.956098 kernel: audit: type=1131 audit(1768892458.946:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.928063 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 07:00:58.965023 kernel: audit: type=1131 audit(1768892458.956:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.928243 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 07:00:58.936840 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 07:00:58.936982 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 07:00:58.945900 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 07:00:58.946109 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 07:00:58.947118 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 07:00:58.947245 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 07:00:58.971694 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 20 07:00:58.976779 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 07:00:58.978300 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 07:00:58.979819 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 07:00:58.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.984931 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 07:00:58.992145 kernel: audit: type=1131 audit(1768892458.981:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.985098 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 07:00:58.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.993300 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 07:00:59.002587 kernel: audit: type=1131 audit(1768892458.992:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.002619 kernel: audit: type=1131 audit(1768892459.001:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:58.993432 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 07:00:59.021273 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 07:00:59.021397 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 07:00:59.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.033569 kernel: audit: type=1130 audit(1768892459.024:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.046312 ignition[1087]: INFO : Ignition 2.24.0 Jan 20 07:00:59.047621 ignition[1087]: INFO : Stage: umount Jan 20 07:00:59.048304 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 07:00:59.048304 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 20 07:00:59.052023 ignition[1087]: INFO : umount: umount passed Jan 20 07:00:59.054239 ignition[1087]: INFO : Ignition finished successfully Jan 20 07:00:59.054827 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 20 07:00:59.055655 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 07:00:59.055842 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 07:00:59.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.058539 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 07:00:59.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.058950 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 07:00:59.060078 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 07:00:59.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.060149 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 07:00:59.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.063513 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 07:00:59.063596 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 07:00:59.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.065674 systemd[1]: Stopped target network.target - Network. Jan 20 07:00:59.067958 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 07:00:59.068038 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 07:00:59.069924 systemd[1]: Stopped target paths.target - Path Units. Jan 20 07:00:59.071570 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 07:00:59.075677 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 07:00:59.076983 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 07:00:59.079029 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 07:00:59.080786 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 07:00:59.080869 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 07:00:59.082172 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 07:00:59.082232 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 07:00:59.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.083620 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 07:00:59.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.083661 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 07:00:59.084985 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 20 07:00:59.085061 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 07:00:59.086754 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 07:00:59.086824 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 07:00:59.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.089137 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 07:00:59.091641 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 07:00:59.095345 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 07:00:59.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.095469 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 07:00:59.098287 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 07:00:59.098393 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 07:00:59.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.105819 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 07:00:59.106053 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 07:00:59.113511 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 07:00:59.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.113968 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 07:00:59.116000 audit: BPF prog-id=9 op=UNLOAD Jan 20 07:00:59.118176 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 07:00:59.118000 audit: BPF prog-id=6 op=UNLOAD Jan 20 07:00:59.119129 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 07:00:59.119394 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 07:00:59.122661 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 07:00:59.123384 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 07:00:59.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.123466 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 07:00:59.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.127060 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 07:00:59.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.127129 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 20 07:00:59.130064 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 07:00:59.130136 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 07:00:59.133130 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 07:00:59.147871 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 07:00:59.148957 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 07:00:59.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.150679 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 07:00:59.150806 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 07:00:59.162479 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 07:00:59.162538 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 07:00:59.164601 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 07:00:59.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.164673 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 07:00:59.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.166998 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 07:00:59.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.167067 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 07:00:59.168745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 07:00:59.168818 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 07:00:59.172609 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 07:00:59.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.173390 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 07:00:59.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.173468 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 07:00:59.176099 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 20 07:00:59.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.176167 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 07:00:59.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.176976 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 07:00:59.177066 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 07:00:59.179378 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 07:00:59.179444 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 07:00:59.182075 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 07:00:59.182141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 07:00:59.202955 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 07:00:59.204528 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 07:00:59.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.207122 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 07:00:59.208267 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 07:00:59.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:00:59.210232 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 07:00:59.212060 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 07:00:59.238099 systemd[1]: Switching root. Jan 20 07:00:59.281325 systemd-journald[303]: Journal stopped Jan 20 07:01:01.197677 systemd-journald[303]: Received SIGTERM from PID 1 (systemd). Jan 20 07:01:01.197723 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 07:01:01.197740 kernel: SELinux: policy capability open_perms=1 Jan 20 07:01:01.197754 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 07:01:01.197770 kernel: SELinux: policy capability always_check_network=0 Jan 20 07:01:01.197791 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 07:01:01.197806 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 07:01:01.197822 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 07:01:01.197834 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 07:01:01.197849 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 07:01:01.197863 systemd[1]: Successfully loaded SELinux policy in 94.621ms. Jan 20 07:01:01.197885 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.236ms. 
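
The SELinux messages above list the policy capabilities and report a successful policy load followed by a relabel of /dev, /dev/shm and /run. One way to confirm the resulting state is to read the selinuxfs interface; this sketch assumes selinuxfs is mounted at /sys/fs/selinux, which the policy-load messages imply:

```python
# Read the SELinux state corresponding to the policy-load messages above
# from the selinuxfs mount. Assumes /sys/fs/selinux exists on this system.
from pathlib import Path

SELINUXFS = Path("/sys/fs/selinux")

def selinux_status() -> dict:
    if not SELINUXFS.exists():
        return {"enabled": False}
    return {
        "enabled": True,
        "enforcing": SELINUXFS.joinpath("enforce").read_text().strip() == "1",
        "policy_version": SELINUXFS.joinpath("policyvers").read_text().strip(),
    }

if __name__ == "__main__":
    print(selinux_status())
```
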
Jan 20 07:01:01.197902 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 07:01:01.197920 systemd[1]: Detected virtualization kvm. Jan 20 07:01:01.197940 systemd[1]: Detected architecture x86-64. Jan 20 07:01:01.197953 systemd[1]: Detected first boot. Jan 20 07:01:01.197971 systemd[1]: Initializing machine ID from random generator. Jan 20 07:01:01.197989 zram_generator::config[1130]: No configuration found. Jan 20 07:01:01.198009 kernel: Guest personality initialized and is inactive Jan 20 07:01:01.198025 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 07:01:01.198044 kernel: Initialized host personality Jan 20 07:01:01.198059 kernel: NET: Registered PF_VSOCK protocol family Jan 20 07:01:01.198077 systemd[1]: Populated /etc with preset unit settings. Jan 20 07:01:01.198093 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 07:01:01.198106 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 07:01:01.198123 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 07:01:01.198149 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 07:01:01.198172 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 07:01:01.198190 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 07:01:01.198209 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 07:01:01.198225 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 07:01:01.198243 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 07:01:01.198259 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 07:01:01.198280 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 07:01:01.198297 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 07:01:01.198316 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 07:01:01.198334 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 07:01:01.198352 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 07:01:01.198367 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 07:01:01.198385 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 07:01:01.198402 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 07:01:01.198415 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 07:01:01.198427 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 07:01:01.198440 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 07:01:01.198452 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 07:01:01.198468 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jan 20 07:01:01.198483 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 07:01:01.198502 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 07:01:01.198518 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 07:01:01.198537 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 07:01:01.198571 systemd[1]: Reached target slices.target - Slice Units. Jan 20 07:01:01.198590 systemd[1]: Reached target swap.target - Swaps. Jan 20 07:01:01.198614 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 07:01:01.198630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 07:01:01.198648 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 07:01:01.198667 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 07:01:01.198689 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 07:01:01.198707 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 07:01:01.198723 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 07:01:01.198738 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 07:01:01.198757 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 07:01:01.198776 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 07:01:01.198798 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 07:01:01.198816 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 07:01:01.198834 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 07:01:01.198852 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 07:01:01.198871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 07:01:01.198888 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 07:01:01.198905 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 07:01:01.198928 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 07:01:01.198947 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 07:01:01.198963 systemd[1]: Reached target machines.target - Containers. Jan 20 07:01:01.198980 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 07:01:01.199011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 07:01:01.199026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 07:01:01.199042 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 07:01:01.199065 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 07:01:01.199081 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 07:01:01.199099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 20 07:01:01.199111 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 07:01:01.199128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 07:01:01.199146 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 07:01:01.199169 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 07:01:01.199188 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 07:01:01.199202 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 07:01:01.199215 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 07:01:01.199234 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 07:01:01.199252 kernel: fuse: init (API version 7.41) Jan 20 07:01:01.199266 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 07:01:01.199305 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 07:01:01.199325 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 07:01:01.199344 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 07:01:01.199358 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 07:01:01.199377 kernel: ACPI: bus type drm_connector registered Jan 20 07:01:01.199395 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 07:01:01.199417 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 07:01:01.199433 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 07:01:01.199451 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 07:01:01.199469 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 07:01:01.199487 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 07:01:01.199505 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 07:01:01.199520 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 07:01:01.199541 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 07:01:01.199600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 07:01:01.199653 systemd-journald[1215]: Collecting audit messages is enabled. Jan 20 07:01:01.199692 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 07:01:01.199712 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 07:01:01.199730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 07:01:01.199747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 07:01:01.199764 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 07:01:01.199782 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 07:01:01.199800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
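
The modprobe@*.service units above pull in configfs, dm_mod, drm, efi_pstore, fuse and loop (the kernel's "fuse: init" and "drm_connector registered" lines confirm two of them). A small check against /proc/modules, with the caveat that modules built into the kernel will not appear there:

```python
# Check whether the modules loaded by the modprobe@ units above are present
# in the running kernel by scanning /proc/modules.
MODULES = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

def loaded_modules() -> set[str]:
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f}

if __name__ == "__main__":
    present = loaded_modules()
    for mod in sorted(MODULES):
        print(f"{mod}: {'loaded' if mod in present else 'missing or built-in'}")
```
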
Jan 20 07:01:01.199822 systemd-journald[1215]: Journal started Jan 20 07:01:01.199860 systemd-journald[1215]: Runtime Journal (/run/log/journal/abd30632d5c449a897ebf547d8c72ee1) is 8M, max 78.1M, 70.1M free. Jan 20 07:01:00.671000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 07:01:00.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:00.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:00.973000 audit: BPF prog-id=14 op=UNLOAD Jan 20 07:01:00.973000 audit: BPF prog-id=13 op=UNLOAD Jan 20 07:01:00.975000 audit: BPF prog-id=15 op=LOAD Jan 20 07:01:00.997000 audit: BPF prog-id=16 op=LOAD Jan 20 07:01:00.998000 audit: BPF prog-id=17 op=LOAD Jan 20 07:01:01.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.165000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 07:01:01.165000 audit[1215]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff21857530 a2=4000 a3=0 items=0 ppid=1 pid=1215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:01.165000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 07:01:01.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:01:01.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:00.523064 systemd[1]: Queued start job for default target multi-user.target. Jan 20 07:01:00.538350 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 20 07:01:00.539389 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 07:01:01.204596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 07:01:01.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.227595 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 07:01:01.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.212483 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 07:01:01.212795 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 07:01:01.214052 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 07:01:01.216609 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 07:01:01.217857 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 07:01:01.228001 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 07:01:01.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:01:01.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.230817 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 07:01:01.240166 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 07:01:01.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.249245 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 07:01:01.251066 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 07:01:01.251873 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 07:01:01.251899 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 07:01:01.253942 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 07:01:01.254992 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 07:01:01.255199 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 07:01:01.260724 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 07:01:01.262739 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 07:01:01.265650 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 07:01:01.284051 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 07:01:01.286091 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 07:01:01.289854 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 07:01:01.316305 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 07:01:01.321726 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 07:01:01.330762 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 07:01:01.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.333500 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 07:01:01.340100 systemd-journald[1215]: Time spent on flushing to /var/log/journal/abd30632d5c449a897ebf547d8c72ee1 is 127.786ms for 1130 entries. Jan 20 07:01:01.340100 systemd-journald[1215]: System Journal (/var/log/journal/abd30632d5c449a897ebf547d8c72ee1) is 8M, max 588.1M, 580.1M free. Jan 20 07:01:01.725127 systemd-journald[1215]: Received client request to flush runtime journal. 
Jan 20 07:01:01.725176 kernel: loop1: detected capacity change from 0 to 229808 Jan 20 07:01:01.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.344800 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 07:01:01.730920 kernel: loop2: detected capacity change from 0 to 50784 Jan 20 07:01:01.546772 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 07:01:01.554750 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 07:01:01.599814 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 07:01:01.634354 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 07:01:01.649915 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 07:01:01.655180 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 07:01:01.658445 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 07:01:01.663309 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 07:01:01.685527 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 20 07:01:01.685541 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 20 07:01:01.712446 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 07:01:01.723068 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 07:01:01.737190 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 07:01:01.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:01.788607 kernel: loop3: detected capacity change from 0 to 111560 Jan 20 07:01:01.823356 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 07:01:01.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:01:01.826000 audit: BPF prog-id=18 op=LOAD Jan 20 07:01:01.826000 audit: BPF prog-id=19 op=LOAD Jan 20 07:01:01.827000 audit: BPF prog-id=20 op=LOAD Jan 20 07:01:01.836611 kernel: loop4: detected capacity change from 0 to 8 Jan 20 07:01:01.832456 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 07:01:01.837000 audit: BPF prog-id=21 op=LOAD Jan 20 07:01:01.841891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 07:01:01.848254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 07:01:01.869663 kernel: loop5: detected capacity change from 0 to 229808 Jan 20 07:01:01.871000 audit: BPF prog-id=22 op=LOAD Jan 20 07:01:01.871000 audit: BPF prog-id=23 op=LOAD Jan 20 07:01:01.871000 audit: BPF prog-id=24 op=LOAD Jan 20 07:01:01.875781 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 07:01:01.897000 audit: BPF prog-id=25 op=LOAD Jan 20 07:01:01.897000 audit: BPF prog-id=26 op=LOAD Jan 20 07:01:01.897000 audit: BPF prog-id=27 op=LOAD Jan 20 07:01:01.923294 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 07:01:02.020591 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Jan 20 07:01:02.020605 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Jan 20 07:01:02.033778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 07:01:02.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:02.050584 kernel: loop6: detected capacity change from 0 to 50784 Jan 20 07:01:02.095950 kernel: loop7: detected capacity change from 0 to 111560 Jan 20 07:01:02.108701 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 07:01:02.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:02.117769 systemd-nsresourced[1286]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 07:01:02.119687 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 07:01:02.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:02.132020 kernel: loop1: detected capacity change from 0 to 8 Jan 20 07:01:02.141321 (sd-merge)[1284]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'. Jan 20 07:01:02.159417 (sd-merge)[1284]: Merged extensions into '/usr'. Jan 20 07:01:02.192826 systemd[1]: Reload requested from client PID 1254 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 07:01:02.192903 systemd[1]: Reloading... Jan 20 07:01:02.545580 zram_generator::config[1331]: No configuration found. Jan 20 07:01:02.674499 systemd-resolved[1282]: Positive Trust Anchors: Jan 20 07:01:02.675054 systemd-resolved[1282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 07:01:02.675116 systemd-resolved[1282]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 07:01:02.675193 systemd-resolved[1282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 07:01:02.676452 systemd-oomd[1279]: No swap; memory pressure usage will be degraded Jan 20 07:01:02.696642 systemd-resolved[1282]: Defaulting to hostname 'linux'. Jan 20 07:01:02.944045 systemd[1]: Reloading finished in 749 ms. Jan 20 07:01:02.971803 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 07:01:02.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:02.973011 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 07:01:02.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:02.974233 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 07:01:02.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:02.975441 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 07:01:02.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:02.981114 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 07:01:02.986664 systemd[1]: Starting ensure-sysext.service... Jan 20 07:01:02.991000 audit: BPF prog-id=8 op=UNLOAD Jan 20 07:01:02.991000 audit: BPF prog-id=7 op=UNLOAD Jan 20 07:01:02.991000 audit: BPF prog-id=28 op=LOAD Jan 20 07:01:02.991000 audit: BPF prog-id=29 op=LOAD Jan 20 07:01:02.990724 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 07:01:02.993939 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 20 07:01:02.998000 audit: BPF prog-id=30 op=LOAD Jan 20 07:01:02.999000 audit: BPF prog-id=21 op=UNLOAD Jan 20 07:01:03.002000 audit: BPF prog-id=31 op=LOAD Jan 20 07:01:03.002000 audit: BPF prog-id=22 op=UNLOAD Jan 20 07:01:03.003000 audit: BPF prog-id=32 op=LOAD Jan 20 07:01:03.003000 audit: BPF prog-id=33 op=LOAD Jan 20 07:01:03.003000 audit: BPF prog-id=23 op=UNLOAD Jan 20 07:01:03.003000 audit: BPF prog-id=24 op=UNLOAD Jan 20 07:01:03.005000 audit: BPF prog-id=34 op=LOAD Jan 20 07:01:03.007000 audit: BPF prog-id=18 op=UNLOAD Jan 20 07:01:03.007000 audit: BPF prog-id=35 op=LOAD Jan 20 07:01:03.007000 audit: BPF prog-id=36 op=LOAD Jan 20 07:01:03.007000 audit: BPF prog-id=19 op=UNLOAD Jan 20 07:01:03.007000 audit: BPF prog-id=20 op=UNLOAD Jan 20 07:01:03.008000 audit: BPF prog-id=37 op=LOAD Jan 20 07:01:03.013000 audit: BPF prog-id=15 op=UNLOAD Jan 20 07:01:03.013000 audit: BPF prog-id=38 op=LOAD Jan 20 07:01:03.013000 audit: BPF prog-id=39 op=LOAD Jan 20 07:01:03.013000 audit: BPF prog-id=16 op=UNLOAD Jan 20 07:01:03.013000 audit: BPF prog-id=17 op=UNLOAD Jan 20 07:01:03.014000 audit: BPF prog-id=40 op=LOAD Jan 20 07:01:03.014000 audit: BPF prog-id=25 op=UNLOAD Jan 20 07:01:03.014000 audit: BPF prog-id=41 op=LOAD Jan 20 07:01:03.014000 audit: BPF prog-id=42 op=LOAD Jan 20 07:01:03.014000 audit: BPF prog-id=26 op=UNLOAD Jan 20 07:01:03.014000 audit: BPF prog-id=27 op=UNLOAD Jan 20 07:01:03.030728 systemd[1]: Reload requested from client PID 1374 ('systemctl') (unit ensure-sysext.service)... Jan 20 07:01:03.030747 systemd[1]: Reloading... Jan 20 07:01:03.040255 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 07:01:03.042438 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 07:01:03.044194 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 07:01:03.048108 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Jan 20 07:01:03.048203 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Jan 20 07:01:03.069005 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 07:01:03.069025 systemd-tmpfiles[1375]: Skipping /boot Jan 20 07:01:03.094144 systemd-udevd[1376]: Using default interface naming scheme 'v257'. Jan 20 07:01:03.164194 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 07:01:03.164227 systemd-tmpfiles[1375]: Skipping /boot Jan 20 07:01:03.202598 zram_generator::config[1412]: No configuration found. Jan 20 07:01:03.479906 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 07:01:03.480522 systemd[1]: Reloading finished in 449 ms. Jan 20 07:01:03.491854 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 07:01:03.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.495461 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 07:01:03.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:01:03.501711 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 20 07:01:03.505000 audit: BPF prog-id=43 op=LOAD Jan 20 07:01:03.507000 audit: BPF prog-id=31 op=UNLOAD Jan 20 07:01:03.507000 audit: BPF prog-id=44 op=LOAD Jan 20 07:01:03.508000 audit: BPF prog-id=45 op=LOAD Jan 20 07:01:03.508000 audit: BPF prog-id=32 op=UNLOAD Jan 20 07:01:03.508000 audit: BPF prog-id=33 op=UNLOAD Jan 20 07:01:03.512000 audit: BPF prog-id=46 op=LOAD Jan 20 07:01:03.512000 audit: BPF prog-id=30 op=UNLOAD Jan 20 07:01:03.513000 audit: BPF prog-id=47 op=LOAD Jan 20 07:01:03.517576 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 07:01:03.519000 audit: BPF prog-id=37 op=UNLOAD Jan 20 07:01:03.519000 audit: BPF prog-id=48 op=LOAD Jan 20 07:01:03.519000 audit: BPF prog-id=49 op=LOAD Jan 20 07:01:03.519000 audit: BPF prog-id=38 op=UNLOAD Jan 20 07:01:03.519000 audit: BPF prog-id=39 op=UNLOAD Jan 20 07:01:03.525000 audit: BPF prog-id=50 op=LOAD Jan 20 07:01:03.526000 audit: BPF prog-id=34 op=UNLOAD Jan 20 07:01:03.526000 audit: BPF prog-id=51 op=LOAD Jan 20 07:01:03.527000 audit: BPF prog-id=52 op=LOAD Jan 20 07:01:03.527000 audit: BPF prog-id=35 op=UNLOAD Jan 20 07:01:03.527000 audit: BPF prog-id=36 op=UNLOAD Jan 20 07:01:03.527000 audit: BPF prog-id=53 op=LOAD Jan 20 07:01:03.529000 audit: BPF prog-id=40 op=UNLOAD Jan 20 07:01:03.529000 audit: BPF prog-id=54 op=LOAD Jan 20 07:01:03.529000 audit: BPF prog-id=55 op=LOAD Jan 20 07:01:03.529000 audit: BPF prog-id=41 op=UNLOAD Jan 20 07:01:03.529000 audit: BPF prog-id=42 op=UNLOAD Jan 20 07:01:03.529000 audit: BPF prog-id=56 op=LOAD Jan 20 07:01:03.529000 audit: BPF prog-id=57 op=LOAD Jan 20 07:01:03.531000 audit: BPF prog-id=28 op=UNLOAD Jan 20 07:01:03.531000 audit: BPF prog-id=29 op=UNLOAD Jan 20 07:01:03.544601 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 07:01:03.549706 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 07:01:03.559673 kernel: ACPI: button: Power Button [PWRF] Jan 20 07:01:03.568709 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 07:01:03.572630 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 07:01:03.576632 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 07:01:03.578763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 07:01:03.581249 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 07:01:03.585426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 07:01:03.611983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 07:01:03.615861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 07:01:03.616332 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 07:01:03.638690 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 07:01:03.639694 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 20 07:01:03.644999 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 07:01:03.647000 audit: BPF prog-id=58 op=LOAD Jan 20 07:01:03.650860 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 07:01:03.657124 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 07:01:03.659639 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 07:01:03.758899 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 07:01:03.759146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 07:01:03.833642 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 07:01:03.835490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 07:01:03.835760 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 07:01:03.835864 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 07:01:03.835989 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 07:01:03.840701 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 07:01:03.841036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 07:01:03.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.853728 systemd[1]: Finished ensure-sysext.service. Jan 20 07:01:03.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.857488 kernel: kauditd_printk_skb: 164 callbacks suppressed Jan 20 07:01:03.857591 kernel: audit: type=1130 audit(1768892463.855:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.866312 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 07:01:03.866651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 07:01:03.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:01:03.892683 kernel: audit: type=1130 audit(1768892463.866:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.892834 kernel: audit: type=1131 audit(1768892463.866:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.886129 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 07:01:03.899000 audit: BPF prog-id=59 op=LOAD Jan 20 07:01:03.902580 kernel: audit: type=1334 audit(1768892463.899:219): prog-id=59 op=LOAD Jan 20 07:01:03.900000 audit[1499]: SYSTEM_BOOT pid=1499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.906792 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 07:01:03.911644 kernel: audit: type=1127 audit(1768892463.900:220): pid=1499 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.960249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 07:01:03.960858 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 07:01:03.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.969907 kernel: audit: type=1130 audit(1768892463.962:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.969131 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 07:01:03.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.986595 kernel: audit: type=1131 audit(1768892463.962:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:03.995069 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 20 07:01:03.999136 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 07:01:04.026597 kernel: audit: type=1130 audit(1768892463.970:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:04.026706 kernel: audit: type=1130 audit(1768892464.016:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:04.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:04.018184 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 07:01:04.018516 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 07:01:04.026278 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 07:01:04.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:04.040573 kernel: audit: type=1130 audit(1768892464.024:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:04.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:04.092052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 07:01:04.093000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 07:01:04.093000 audit[1536]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9d17cfc0 a2=420 a3=0 items=0 ppid=1488 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:04.093000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 07:01:04.094871 augenrules[1536]: No rules Jan 20 07:01:04.106645 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 07:01:04.108224 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 07:01:04.108661 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 07:01:04.119581 kernel: EDAC MC: Ver: 3.0.0 Jan 20 07:01:04.288413 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 20 07:01:04.291360 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 07:01:04.337823 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 20 07:01:04.467785 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 07:01:04.468847 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 07:01:04.532821 systemd-networkd[1498]: lo: Link UP Jan 20 07:01:04.532838 systemd-networkd[1498]: lo: Gained carrier Jan 20 07:01:04.542798 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 07:01:04.543006 systemd-networkd[1498]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 07:01:04.543013 systemd-networkd[1498]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 07:01:04.555382 systemd-networkd[1498]: eth0: Link UP Jan 20 07:01:04.555750 systemd-networkd[1498]: eth0: Gained carrier Jan 20 07:01:04.555773 systemd-networkd[1498]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 07:01:04.606788 systemd[1]: Reached target network.target - Network. Jan 20 07:01:04.611083 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 07:01:04.616748 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 07:01:04.620248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 07:01:04.658929 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 07:01:05.302672 ldconfig[1495]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 07:01:05.308119 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 07:01:05.311680 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 07:01:05.356953 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 07:01:05.358150 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 07:01:05.359094 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 07:01:05.360073 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 07:01:05.360943 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 07:01:05.362226 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 07:01:05.363161 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 07:01:05.364030 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 07:01:05.364974 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 07:01:05.365779 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 07:01:05.366621 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 07:01:05.366672 systemd[1]: Reached target paths.target - Path Units. Jan 20 07:01:05.367415 systemd[1]: Reached target timers.target - Timer Units. Jan 20 07:01:05.369588 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 07:01:05.372352 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 20 07:01:05.394591 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 07:01:05.395690 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 07:01:05.396478 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 07:01:05.408629 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 07:01:05.409994 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 07:01:05.411655 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 07:01:05.413400 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 07:01:05.414240 systemd[1]: Reached target basic.target - Basic System. Jan 20 07:01:05.415135 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 07:01:05.415171 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 07:01:05.416518 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 07:01:05.422129 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 07:01:05.435857 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 07:01:05.442843 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 07:01:05.451821 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 07:01:05.470734 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 07:01:05.473753 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 07:01:05.482636 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 07:01:05.487619 jq[1567]: false Jan 20 07:01:05.488647 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 07:01:05.555862 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 07:01:05.563860 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 07:01:05.574965 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 07:01:05.603958 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 07:01:05.604854 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 07:01:05.637680 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 07:01:05.639704 systemd-networkd[1498]: eth0: Gained IPv6LL Jan 20 07:01:05.652049 systemd-timesyncd[1517]: Network configuration changed, trying to establish connection. 
Jan 20 07:01:05.654307 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache Jan 20 07:01:05.656834 oslogin_cache_refresh[1569]: Refreshing passwd entry cache Jan 20 07:01:05.667585 extend-filesystems[1568]: Found /dev/sda6 Jan 20 07:01:05.666298 oslogin_cache_refresh[1569]: Failure getting users, quitting Jan 20 07:01:05.669591 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting Jan 20 07:01:05.669591 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 07:01:05.669591 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache Jan 20 07:01:05.669591 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting Jan 20 07:01:05.669591 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 07:01:05.666329 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 07:01:05.666444 oslogin_cache_refresh[1569]: Refreshing group entry cache Jan 20 07:01:05.667153 oslogin_cache_refresh[1569]: Failure getting groups, quitting Jan 20 07:01:05.667175 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 07:01:05.677032 extend-filesystems[1568]: Found /dev/sda9 Jan 20 07:01:05.684434 extend-filesystems[1568]: Checking size of /dev/sda9 Jan 20 07:01:05.685223 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 07:01:05.705165 coreos-metadata[1564]: Jan 20 07:01:05.705 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 20 07:01:05.749107 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 07:01:05.787765 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 07:01:05.790590 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 07:01:05.791650 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 07:01:05.792439 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 07:01:05.793736 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 07:01:05.794718 jq[1592]: true Jan 20 07:01:05.812533 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 07:01:05.814130 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 07:01:05.858949 extend-filesystems[1568]: Resized partition /dev/sda9 Jan 20 07:01:05.861674 update_engine[1586]: I20260120 07:01:05.857601 1586 main.cc:92] Flatcar Update Engine starting Jan 20 07:01:05.855632 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 07:01:05.856641 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 07:01:05.865133 extend-filesystems[1615]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 07:01:05.876583 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks Jan 20 07:01:05.876681 jq[1601]: true Jan 20 07:01:06.058711 tar[1600]: linux-amd64/LICENSE Jan 20 07:01:06.058711 tar[1600]: linux-amd64/helm Jan 20 07:01:06.151688 dbus-daemon[1565]: [system] SELinux support is enabled Jan 20 07:01:06.152082 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 20 07:01:06.155980 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 07:01:06.156023 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 07:01:06.157792 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 07:01:06.157829 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 07:01:06.178443 systemd[1]: Started update-engine.service - Update Engine. Jan 20 07:01:06.201156 update_engine[1586]: I20260120 07:01:06.179497 1586 update_check_scheduler.cc:74] Next update check in 12m0s Jan 20 07:01:06.185872 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 07:01:06.225757 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 07:01:06.225817 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 07:01:06.226381 systemd-logind[1577]: New seat seat0. Jan 20 07:01:06.227966 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 07:01:06.316861 bash[1637]: Updated "/home/core/.ssh/authorized_keys" Jan 20 07:01:06.318163 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 07:01:06.354041 systemd[1]: Starting sshkeys.service... Jan 20 07:01:06.427400 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 20 07:01:06.449304 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 20 07:01:06.767543 coreos-metadata[1564]: Jan 20 07:01:06.760 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 20 07:01:07.001194 coreos-metadata[1641]: Jan 20 07:01:07.000 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 20 07:01:07.018048 systemd-networkd[1498]: eth0: DHCPv4 address 172.232.7.121/24, gateway 172.232.7.1 acquired from 23.192.120.231 Jan 20 07:01:07.018445 dbus-daemon[1565]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1498 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 20 07:01:07.034605 systemd-timesyncd[1517]: Network configuration changed, trying to establish connection. Jan 20 07:01:07.034922 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 20 07:01:07.035209 systemd-timesyncd[1517]: Network configuration changed, trying to establish connection. Jan 20 07:01:07.038571 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 07:01:07.045414 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 07:01:07.065893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 07:01:07.072197 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 07:01:07.256943 sshd_keygen[1614]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 07:01:07.330798 kernel: EXT4-fs (sda9): resized filesystem to 19377147 Jan 20 07:01:07.458363 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 20 07:01:07.462185 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 07:01:07.469372 extend-filesystems[1615]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 20 07:01:07.469372 extend-filesystems[1615]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 20 07:01:07.469372 extend-filesystems[1615]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long. Jan 20 07:01:07.473654 extend-filesystems[1568]: Resized filesystem in /dev/sda9 Jan 20 07:01:07.472093 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 07:01:07.472648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 07:01:07.577038 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 07:01:07.603640 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 07:01:07.750581 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 20 07:01:07.754420 dbus-daemon[1565]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 20 07:01:07.756036 dbus-daemon[1565]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.9' (uid=0 pid=1650 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 20 07:01:07.795861 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 07:01:07.796452 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 07:01:07.852324 systemd[1]: Starting polkit.service - Authorization Manager... Jan 20 07:01:07.855002 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 07:01:07.973477 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 07:01:07.980199 systemd[1]: Started sshd@0-172.232.7.121:22-20.161.92.111:56394.service - OpenSSH per-connection server daemon (20.161.92.111:56394). Jan 20 07:01:08.120685 coreos-metadata[1641]: Jan 20 07:01:08.120 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 20 07:01:08.170232 containerd[1604]: time="2026-01-20T07:01:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 07:01:08.174068 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 07:01:08.183249 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 07:01:08.188540 systemd-timesyncd[1517]: Network configuration changed, trying to establish connection. Jan 20 07:01:08.191618 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 07:01:08.191827 containerd[1604]: time="2026-01-20T07:01:08.191538998Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 07:01:08.193901 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 20 07:01:08.264159 coreos-metadata[1641]: Jan 20 07:01:08.263 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 20 07:01:08.300889 containerd[1604]: time="2026-01-20T07:01:08.299759172Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.99µs" Jan 20 07:01:08.300889 containerd[1604]: time="2026-01-20T07:01:08.300864913Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 07:01:08.301138 containerd[1604]: time="2026-01-20T07:01:08.301093463Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 07:01:08.301178 containerd[1604]: time="2026-01-20T07:01:08.301134943Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 07:01:08.301716 containerd[1604]: time="2026-01-20T07:01:08.301673263Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 07:01:08.301767 containerd[1604]: time="2026-01-20T07:01:08.301718603Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 07:01:08.301941 containerd[1604]: time="2026-01-20T07:01:08.301890653Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 07:01:08.301941 containerd[1604]: time="2026-01-20T07:01:08.301930003Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 07:01:08.302397 containerd[1604]: time="2026-01-20T07:01:08.302341564Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 07:01:08.302439 containerd[1604]: time="2026-01-20T07:01:08.302405174Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 07:01:08.302467 containerd[1604]: time="2026-01-20T07:01:08.302435644Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 07:01:08.302467 containerd[1604]: time="2026-01-20T07:01:08.302458174Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 07:01:08.304463 containerd[1604]: time="2026-01-20T07:01:08.304054655Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 07:01:08.304463 containerd[1604]: time="2026-01-20T07:01:08.304086925Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 07:01:08.304463 containerd[1604]: time="2026-01-20T07:01:08.304231985Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 07:01:08.304727 containerd[1604]: time="2026-01-20T07:01:08.304691495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 07:01:08.304783 containerd[1604]: time="2026-01-20T07:01:08.304746065Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 07:01:08.304783 containerd[1604]: time="2026-01-20T07:01:08.304766655Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 07:01:08.304843 containerd[1604]: time="2026-01-20T07:01:08.304827085Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 07:01:08.305155 containerd[1604]: time="2026-01-20T07:01:08.305112075Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 07:01:08.305275 containerd[1604]: time="2026-01-20T07:01:08.305244245Z" level=info msg="metadata content store policy set" policy=shared Jan 20 07:01:08.313409 containerd[1604]: time="2026-01-20T07:01:08.313346999Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 07:01:08.313488 containerd[1604]: time="2026-01-20T07:01:08.313472469Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 07:01:08.313742 containerd[1604]: time="2026-01-20T07:01:08.313697919Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 07:01:08.313742 containerd[1604]: time="2026-01-20T07:01:08.313727249Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 07:01:08.313742 containerd[1604]: time="2026-01-20T07:01:08.313742769Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 07:01:08.313848 containerd[1604]: time="2026-01-20T07:01:08.313761939Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 07:01:08.313848 containerd[1604]: time="2026-01-20T07:01:08.313774439Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 07:01:08.313848 containerd[1604]: time="2026-01-20T07:01:08.313844929Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 07:01:08.313904 containerd[1604]: time="2026-01-20T07:01:08.313863639Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 07:01:08.313904 containerd[1604]: time="2026-01-20T07:01:08.313893829Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 07:01:08.313953 containerd[1604]: time="2026-01-20T07:01:08.313910159Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 07:01:08.313953 containerd[1604]: time="2026-01-20T07:01:08.313921779Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 07:01:08.313953 containerd[1604]: time="2026-01-20T07:01:08.313933009Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 07:01:08.313953 containerd[1604]: time="2026-01-20T07:01:08.313946619Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 07:01:08.314166 containerd[1604]: time="2026-01-20T07:01:08.314136710Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 07:01:08.314199 containerd[1604]: time="2026-01-20T07:01:08.314169830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 07:01:08.314199 containerd[1604]: time="2026-01-20T07:01:08.314186630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 07:01:08.314199 containerd[1604]: time="2026-01-20T07:01:08.314197660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 07:01:08.314308 containerd[1604]: time="2026-01-20T07:01:08.314209210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 07:01:08.314308 containerd[1604]: time="2026-01-20T07:01:08.314220460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 07:01:08.314308 containerd[1604]: time="2026-01-20T07:01:08.314233340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 07:01:08.314308 containerd[1604]: time="2026-01-20T07:01:08.314251350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 07:01:08.314308 containerd[1604]: time="2026-01-20T07:01:08.314262650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 07:01:08.314308 containerd[1604]: time="2026-01-20T07:01:08.314273560Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 07:01:08.314308 containerd[1604]: time="2026-01-20T07:01:08.314289080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 07:01:08.314482 containerd[1604]: time="2026-01-20T07:01:08.314314020Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 07:01:08.314482 containerd[1604]: time="2026-01-20T07:01:08.314408130Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 07:01:08.314482 containerd[1604]: time="2026-01-20T07:01:08.314427360Z" level=info msg="Start snapshots syncer" Jan 20 07:01:08.314482 containerd[1604]: time="2026-01-20T07:01:08.314480330Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 07:01:08.339848 containerd[1604]: time="2026-01-20T07:01:08.331155308Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 07:01:08.339848 containerd[1604]: time="2026-01-20T07:01:08.331265388Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331359998Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331616798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331643058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331657208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331668768Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331681668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331693728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331713708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331724938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 
07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331736458Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331821048Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331861538Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 07:01:08.340312 containerd[1604]: time="2026-01-20T07:01:08.331879168Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 07:01:08.340687 containerd[1604]: time="2026-01-20T07:01:08.331889768Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 07:01:08.340687 containerd[1604]: time="2026-01-20T07:01:08.331899288Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 07:01:08.340687 containerd[1604]: time="2026-01-20T07:01:08.331925478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 07:01:08.340687 containerd[1604]: time="2026-01-20T07:01:08.331942508Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 07:01:08.340687 containerd[1604]: time="2026-01-20T07:01:08.331956018Z" level=info msg="runtime interface created" Jan 20 07:01:08.340687 containerd[1604]: time="2026-01-20T07:01:08.331962358Z" level=info msg="created NRI interface" Jan 20 07:01:08.340687 containerd[1604]: time="2026-01-20T07:01:08.331972668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 07:01:08.340687 containerd[1604]: time="2026-01-20T07:01:08.331987088Z" level=info msg="Connect containerd service" Jan 20 07:01:08.340687 containerd[1604]: time="2026-01-20T07:01:08.332036168Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 07:01:08.340687 containerd[1604]: time="2026-01-20T07:01:08.333751799Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 07:01:08.412618 coreos-metadata[1641]: Jan 20 07:01:08.405 INFO Fetch successful Jan 20 07:01:08.537398 polkitd[1682]: Started polkitd version 126 Jan 20 07:01:08.568506 polkitd[1682]: Loading rules from directory /etc/polkit-1/rules.d Jan 20 07:01:08.569227 polkitd[1682]: Loading rules from directory /run/polkit-1/rules.d Jan 20 07:01:08.569302 polkitd[1682]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 20 07:01:08.569528 polkitd[1682]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 20 07:01:08.569574 polkitd[1682]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 20 07:01:08.569628 polkitd[1682]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 20 07:01:08.570643 polkitd[1682]: Finished loading, compiling and executing 2 rules 
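The containerd messages above show several snapshotters being skipped after failed precondition checks (btrfs needs a btrfs-backed state directory, erofs needs mkfs.erofs in $PATH, zfs needs an existing dataset directory), and the CRI plugin warning that no network config exists yet in /etc/cni/net.d. A minimal, stdlib-only Go sketch of the same kind of checks follows; it is only an illustration of the preconditions, not containerd's actual plugin code, and the paths are the ones taken from the log.

```go
// precheck.go - illustrative only: reproduces the kind of precondition
// checks containerd reports above (not containerd's real plugin code).
// Linux-only because of syscall.Statfs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"syscall"
)

const btrfsSuperMagic = 0x9123683e // BTRFS_SUPER_MAGIC

func main() {
	// btrfs snapshotter: the backing directory must live on a btrfs filesystem.
	var st syscall.Statfs_t
	if err := syscall.Statfs("/var/lib/containerd", &st); err == nil {
		if uint64(st.Type) != btrfsSuperMagic {
			fmt.Println("skip btrfs snapshotter: not a btrfs filesystem")
		}
	}

	// erofs differ/snapshotter: mkfs.erofs must be resolvable in $PATH.
	if _, err := exec.LookPath("mkfs.erofs"); err != nil {
		fmt.Println("skip erofs: mkfs.erofs not found in $PATH")
	}

	// zfs snapshotter: its state directory must already exist.
	if _, err := os.Stat("/var/lib/containerd/io.containerd.snapshotter.v1.zfs"); err != nil {
		fmt.Println("skip zfs snapshotter:", err)
	}

	// CRI/CNI: at least one config file must be present in /etc/cni/net.d.
	var confs []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		confs = append(confs, m...)
	}
	if len(confs) == 0 {
		fmt.Println("cni config load failed: no network config found in /etc/cni/net.d")
	}
}
```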
Jan 20 07:01:08.571122 systemd[1]: Started polkit.service - Authorization Manager. Jan 20 07:01:08.574023 dbus-daemon[1565]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 20 07:01:08.574689 polkitd[1682]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 20 07:01:08.650165 systemd-hostnamed[1650]: Hostname set to <172-232-7-121> (transient) Jan 20 07:01:08.676515 systemd-resolved[1282]: System hostname changed to '172-232-7-121'. Jan 20 07:01:08.778451 update-ssh-keys[1696]: Updated "/home/core/.ssh/authorized_keys" Jan 20 07:01:08.780490 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 20 07:01:08.792913 systemd[1]: Finished sshkeys.service. Jan 20 07:01:08.811809 coreos-metadata[1564]: Jan 20 07:01:08.811 INFO Putting http://169.254.169.254/v1/token: Attempt #3 Jan 20 07:01:08.906863 coreos-metadata[1564]: Jan 20 07:01:08.906 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 20 07:01:08.984964 sshd[1685]: Accepted publickey for core from 20.161.92.111 port 56394 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:01:08.971371 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:01:08.985138 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 07:01:08.991643 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 07:01:09.105245 systemd-logind[1577]: New session 1 of user core. Jan 20 07:01:09.203810 coreos-metadata[1564]: Jan 20 07:01:09.203 INFO Fetch successful Jan 20 07:01:09.203810 coreos-metadata[1564]: Jan 20 07:01:09.203 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 20 07:01:09.216966 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 07:01:09.227112 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 07:01:09.280454 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:01:09.302452 containerd[1604]: time="2026-01-20T07:01:09.301110143Z" level=info msg="Start subscribing containerd event" Jan 20 07:01:09.302452 containerd[1604]: time="2026-01-20T07:01:09.301286503Z" level=info msg="Start recovering state" Jan 20 07:01:09.302452 containerd[1604]: time="2026-01-20T07:01:09.302364843Z" level=info msg="Start event monitor" Jan 20 07:01:09.302452 containerd[1604]: time="2026-01-20T07:01:09.302400793Z" level=info msg="Start cni network conf syncer for default" Jan 20 07:01:09.302452 containerd[1604]: time="2026-01-20T07:01:09.302419003Z" level=info msg="Start streaming server" Jan 20 07:01:09.302452 containerd[1604]: time="2026-01-20T07:01:09.302442693Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 07:01:09.302452 containerd[1604]: time="2026-01-20T07:01:09.302461713Z" level=info msg="runtime interface starting up..." Jan 20 07:01:09.303301 containerd[1604]: time="2026-01-20T07:01:09.302471253Z" level=info msg="starting plugins..." Jan 20 07:01:09.303301 containerd[1604]: time="2026-01-20T07:01:09.302507473Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 07:01:09.304745 containerd[1604]: time="2026-01-20T07:01:09.304677174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 07:01:09.305509 systemd-logind[1577]: New session 2 of user core. 
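coreos-metadata above fetches http://169.254.169.254/v1/ssh-keys and the result lands in /home/core/.ssh/authorized_keys via update-ssh-keys. The sketch below is a rough, stdlib-only approximation of that flow, assuming the endpoint returns one public key per line; it is not the coreos-metadata (afterburn) implementation, only an outline of the fetch-and-append step.

```go
// fetchkeys.go - illustrative approximation of the metadata -> authorized_keys
// step logged above; not the coreos-metadata/afterburn implementation.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://169.254.169.254/v1/ssh-keys") // endpoint from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, "fetch failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	keys, err := io.ReadAll(resp.Body)
	if err != nil || resp.StatusCode != http.StatusOK {
		fmt.Fprintln(os.Stderr, "bad response:", resp.Status, err)
		os.Exit(1)
	}

	// Append the fetched keys; 0600 is the usual authorized_keys mode.
	f, err := os.OpenFile("/home/core/.ssh/authorized_keys",
		os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		fmt.Fprintln(os.Stderr, "open failed:", err)
		os.Exit(1)
	}
	defer f.Close()
	if _, err := f.Write(keys); err != nil {
		fmt.Fprintln(os.Stderr, "write failed:", err)
		os.Exit(1)
	}
	fmt.Println(`Updated "/home/core/.ssh/authorized_keys"`)
}
```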
Jan 20 07:01:09.306309 containerd[1604]: time="2026-01-20T07:01:09.306287265Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 07:01:09.347075 tar[1600]: linux-amd64/README.md Jan 20 07:01:09.402263 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 07:01:09.407007 containerd[1604]: time="2026-01-20T07:01:09.405541235Z" level=info msg="containerd successfully booted in 1.240164s" Jan 20 07:01:09.419759 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 07:01:09.479604 coreos-metadata[1564]: Jan 20 07:01:09.478 INFO Fetch successful Jan 20 07:01:09.569646 systemd[1724]: Queued start job for default target default.target. Jan 20 07:01:09.581140 systemd[1724]: Created slice app.slice - User Application Slice. Jan 20 07:01:09.581651 systemd[1724]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 07:01:09.581672 systemd[1724]: Reached target paths.target - Paths. Jan 20 07:01:09.581729 systemd[1724]: Reached target timers.target - Timers. Jan 20 07:01:09.588315 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 07:01:09.590293 systemd[1724]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 07:01:09.660206 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 07:01:09.660368 systemd[1724]: Reached target sockets.target - Sockets. Jan 20 07:01:09.664306 systemd[1724]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 07:01:09.664507 systemd[1724]: Reached target basic.target - Basic System. Jan 20 07:01:09.664906 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 07:01:09.665656 systemd[1724]: Reached target default.target - Main User Target. Jan 20 07:01:09.665784 systemd[1724]: Startup finished in 305ms. Jan 20 07:01:09.678455 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 07:01:09.754000 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 07:01:09.756474 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 07:01:09.800884 systemd[1]: Started sshd@1-172.232.7.121:22-20.161.92.111:56408.service - OpenSSH per-connection server daemon (20.161.92.111:56408). Jan 20 07:01:10.084683 sshd[1760]: Accepted publickey for core from 20.161.92.111 port 56408 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:01:10.086398 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:01:10.094765 systemd-logind[1577]: New session 3 of user core. Jan 20 07:01:10.108057 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 07:01:10.192035 sshd[1764]: Connection closed by 20.161.92.111 port 56408 Jan 20 07:01:10.186613 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Jan 20 07:01:10.178883 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Jan 20 07:01:10.187819 systemd[1]: sshd@1-172.232.7.121:22-20.161.92.111:56408.service: Deactivated successfully. Jan 20 07:01:10.191589 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 07:01:10.197183 systemd-logind[1577]: Removed session 3. Jan 20 07:01:10.210911 systemd[1]: Started sshd@2-172.232.7.121:22-20.161.92.111:56424.service - OpenSSH per-connection server daemon (20.161.92.111:56424). 
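containerd reports above that it is serving on /run/containerd/containerd.sock (and its ttrpc twin) and that it booted successfully. A quick way to confirm the sockets are accepting connections, without a CRI client, is a plain unix-socket dial; this is only a connectivity probe, not how kubelet or crictl actually talk to containerd, which is GRPC/ttrpc over the same socket.

```go
// sockprobe.go - connectivity probe for the socket paths logged above;
// a real client would speak GRPC/ttrpc over this connection.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	for _, path := range []string{
		"/run/containerd/containerd.sock",
		"/run/containerd/containerd.sock.ttrpc",
	} {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: not reachable: %v\n", path, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: accepting connections\n", path)
	}
}
```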
Jan 20 07:01:10.427986 sshd[1770]: Accepted publickey for core from 20.161.92.111 port 56424 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:01:10.428637 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:01:10.481620 systemd-logind[1577]: New session 4 of user core. Jan 20 07:01:10.495873 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 07:01:10.616118 sshd[1774]: Connection closed by 20.161.92.111 port 56424 Jan 20 07:01:10.618755 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Jan 20 07:01:10.625617 systemd[1]: sshd@2-172.232.7.121:22-20.161.92.111:56424.service: Deactivated successfully. Jan 20 07:01:10.630097 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 07:01:10.633461 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Jan 20 07:01:10.636203 systemd-logind[1577]: Removed session 4. Jan 20 07:01:11.792045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 07:01:11.794359 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 07:01:11.797103 systemd[1]: Startup finished in 6.110s (kernel) + 11.864s (initrd) + 12.415s (userspace) = 30.391s. Jan 20 07:01:11.806240 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 07:01:13.395025 kubelet[1784]: E0120 07:01:13.394780 1784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 07:01:13.400530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 07:01:13.400968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 07:01:13.402413 systemd[1]: kubelet.service: Consumed 3.537s CPU time, 265.9M memory peak. Jan 20 07:01:20.657766 systemd[1]: Started sshd@3-172.232.7.121:22-20.161.92.111:35256.service - OpenSSH per-connection server daemon (20.161.92.111:35256). Jan 20 07:01:20.871771 sshd[1796]: Accepted publickey for core from 20.161.92.111 port 35256 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:01:20.875685 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:01:20.885693 systemd-logind[1577]: New session 5 of user core. Jan 20 07:01:20.892767 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 07:01:20.973731 sshd[1800]: Connection closed by 20.161.92.111 port 35256 Jan 20 07:01:20.974805 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Jan 20 07:01:20.980344 systemd[1]: sshd@3-172.232.7.121:22-20.161.92.111:35256.service: Deactivated successfully. Jan 20 07:01:20.983066 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 07:01:20.984075 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Jan 20 07:01:20.986849 systemd-logind[1577]: Removed session 5. Jan 20 07:01:21.005612 systemd[1]: Started sshd@4-172.232.7.121:22-20.161.92.111:35268.service - OpenSSH per-connection server daemon (20.161.92.111:35268). 
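The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the expected state on a node that kubeadm has not configured yet: the unit starts, finds no config, exits 1, and systemd keeps rescheduling it. The sketch below reproduces that check and prints the header of a minimal KubeletConfiguration for orientation; on a real node kubeadm writes the full file, so the embedded YAML is a placeholder, not a working configuration.

```go
// kubeletcfgcheck.go - reproduces the config-file check behind the
// kubelet.service status=1/FAILURE seen above; the embedded YAML is only a
// placeholder header (kubeadm normally generates the real file on join).
package main

import (
	"fmt"
	"os"
)

const configPath = "/var/lib/kubelet/config.yaml"

const minimalHeader = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# ... settings normally filled in by kubeadm ...
`

func main() {
	if _, err := os.Stat(configPath); err != nil {
		fmt.Fprintf(os.Stderr, "failed to load kubelet config file %s: %v\n", configPath, err)
		fmt.Fprintln(os.Stderr, "expected file shape:")
		fmt.Fprint(os.Stderr, minimalHeader)
		os.Exit(1) // mirrors the status=1/FAILURE recorded in the journal
	}
	fmt.Println("kubelet config present:", configPath)
}
```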
Jan 20 07:01:21.170834 sshd[1806]: Accepted publickey for core from 20.161.92.111 port 35268 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:01:21.173108 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:01:21.181444 systemd-logind[1577]: New session 6 of user core. Jan 20 07:01:21.188845 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 07:01:21.244201 sshd[1810]: Connection closed by 20.161.92.111 port 35268 Jan 20 07:01:21.245946 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Jan 20 07:01:21.252170 systemd[1]: sshd@4-172.232.7.121:22-20.161.92.111:35268.service: Deactivated successfully. Jan 20 07:01:21.254478 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 07:01:21.256378 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Jan 20 07:01:21.257799 systemd-logind[1577]: Removed session 6. Jan 20 07:01:21.277140 systemd[1]: Started sshd@5-172.232.7.121:22-20.161.92.111:49632.service - OpenSSH per-connection server daemon (20.161.92.111:49632). Jan 20 07:01:21.435678 sshd[1816]: Accepted publickey for core from 20.161.92.111 port 49632 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:01:21.438174 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:01:21.444565 systemd-logind[1577]: New session 7 of user core. Jan 20 07:01:21.453858 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 07:01:21.513336 sshd[1820]: Connection closed by 20.161.92.111 port 49632 Jan 20 07:01:21.514179 sshd-session[1816]: pam_unix(sshd:session): session closed for user core Jan 20 07:01:21.519418 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Jan 20 07:01:21.521084 systemd[1]: sshd@5-172.232.7.121:22-20.161.92.111:49632.service: Deactivated successfully. Jan 20 07:01:21.523769 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 07:01:21.525829 systemd-logind[1577]: Removed session 7. Jan 20 07:01:21.543970 systemd[1]: Started sshd@6-172.232.7.121:22-20.161.92.111:49640.service - OpenSSH per-connection server daemon (20.161.92.111:49640). Jan 20 07:01:21.700630 sshd[1827]: Accepted publickey for core from 20.161.92.111 port 49640 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:01:21.703164 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:01:21.709421 systemd-logind[1577]: New session 8 of user core. Jan 20 07:01:21.720858 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 07:01:21.783330 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 07:01:21.783879 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 07:01:21.801306 sudo[1832]: pam_unix(sudo:session): session closed for user root Jan 20 07:01:21.823527 sshd[1831]: Connection closed by 20.161.92.111 port 49640 Jan 20 07:01:21.824810 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Jan 20 07:01:21.832351 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Jan 20 07:01:21.832933 systemd[1]: sshd@6-172.232.7.121:22-20.161.92.111:49640.service: Deactivated successfully. Jan 20 07:01:21.835833 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 07:01:21.838474 systemd-logind[1577]: Removed session 8. 
Jan 20 07:01:21.857520 systemd[1]: Started sshd@7-172.232.7.121:22-20.161.92.111:49656.service - OpenSSH per-connection server daemon (20.161.92.111:49656). Jan 20 07:01:22.026843 sshd[1840]: Accepted publickey for core from 20.161.92.111 port 49656 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:01:22.029274 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:01:22.039291 systemd-logind[1577]: New session 9 of user core. Jan 20 07:01:22.047774 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 07:01:22.089112 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 07:01:22.089539 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 07:01:22.093073 sudo[1846]: pam_unix(sudo:session): session closed for user root Jan 20 07:01:22.101738 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 07:01:22.102153 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 07:01:22.111831 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 07:01:22.163000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 07:01:22.164406 augenrules[1870]: No rules Jan 20 07:01:22.165989 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 20 07:01:22.166126 kernel: audit: type=1305 audit(1768892482.163:228): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 07:01:22.169196 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 07:01:22.169870 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 07:01:22.173345 kernel: audit: type=1300 audit(1768892482.163:228): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd6981120 a2=420 a3=0 items=0 ppid=1851 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:22.163000 audit[1870]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd6981120 a2=420 a3=0 items=0 ppid=1851 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:22.171430 sudo[1845]: pam_unix(sudo:session): session closed for user root Jan 20 07:01:22.163000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 07:01:22.185822 kernel: audit: type=1327 audit(1768892482.163:228): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 07:01:22.185897 kernel: audit: type=1130 audit(1768892482.169:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:01:22.195596 kernel: audit: type=1131 audit(1768892482.169:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.206003 kernel: audit: type=1106 audit(1768892482.170:231): pid=1845 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.170000 audit[1845]: USER_END pid=1845 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.200507 sshd-session[1840]: pam_unix(sshd:session): session closed for user core Jan 20 07:01:22.206381 sshd[1844]: Connection closed by 20.161.92.111 port 49656 Jan 20 07:01:22.171000 audit[1845]: CRED_DISP pid=1845 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.210741 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Jan 20 07:01:22.222589 kernel: audit: type=1104 audit(1768892482.171:232): pid=1845 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.222697 kernel: audit: type=1106 audit(1768892482.206:233): pid=1840 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:01:22.206000 audit[1840]: USER_END pid=1840 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:01:22.214994 systemd[1]: sshd@7-172.232.7.121:22-20.161.92.111:49656.service: Deactivated successfully. Jan 20 07:01:22.219920 systemd[1]: session-9.scope: Deactivated successfully. 
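The audit PROCTITLE records above carry the command line as hex with NUL-separated arguments; the value in the auditctl record, for example, decodes to `/sbin/auditctl -R /etc/audit/audit.rules`. A small stdlib-only decoder is below; running it against the iptables/ip6tables PROCTITLE values further down yields the exact chain-creation commands (for instance `/usr/bin/iptables --wait -t nat -N DOCKER`).

```go
// proctitle.go - decodes the hex PROCTITLE values from the audit records
// above into the original NUL-separated argv.
package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Default value taken from the auditctl PROCTITLE record in the journal above.
	titles := []string{
		"2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573",
	}
	if len(os.Args) > 1 {
		titles = os.Args[1:] // pass other proctitle values from the log to decode them
	}
	for _, t := range titles {
		raw, err := hex.DecodeString(t)
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad proctitle:", err)
			continue
		}
		args := strings.Split(string(raw), "\x00")
		fmt.Println(strings.Join(args, " "))
	}
}
```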
Jan 20 07:01:22.206000 audit[1840]: CRED_DISP pid=1840 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:01:22.225249 kernel: audit: type=1104 audit(1768892482.206:234): pid=1840 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:01:22.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.232.7.121:22-20.161.92.111:49656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.232791 kernel: audit: type=1131 audit(1768892482.214:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.232.7.121:22-20.161.92.111:49656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.232.7.121:22-20.161.92.111:49660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.249031 systemd[1]: Started sshd@8-172.232.7.121:22-20.161.92.111:49660.service - OpenSSH per-connection server daemon (20.161.92.111:49660). Jan 20 07:01:22.250874 systemd-logind[1577]: Removed session 9. Jan 20 07:01:22.436000 audit[1879]: USER_ACCT pid=1879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:01:22.438285 sshd[1879]: Accepted publickey for core from 20.161.92.111 port 49660 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:01:22.438000 audit[1879]: CRED_ACQ pid=1879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:01:22.438000 audit[1879]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb25552f0 a2=3 a3=0 items=0 ppid=1 pid=1879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:22.438000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:01:22.439925 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:01:22.447170 systemd-logind[1577]: New session 10 of user core. Jan 20 07:01:22.449946 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 20 07:01:22.453000 audit[1879]: USER_START pid=1879 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:01:22.456000 audit[1883]: CRED_ACQ pid=1883 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:01:22.500817 sudo[1884]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 07:01:22.501251 sudo[1884]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 07:01:22.499000 audit[1884]: USER_ACCT pid=1884 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.500000 audit[1884]: CRED_REFR pid=1884 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 07:01:22.500000 audit[1884]: USER_START pid=1884 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 07:01:23.532420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 07:01:23.536941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 07:01:24.178848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 07:01:24.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:24.212263 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 07:01:24.618451 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 07:01:24.628820 (dockerd)[1916]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 07:01:24.640252 kubelet[1908]: E0120 07:01:24.640138 1908 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 07:01:24.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 07:01:24.653273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 07:01:24.653529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 07:01:24.654282 systemd[1]: kubelet.service: Consumed 988ms CPU time, 111.2M memory peak. 
Jan 20 07:01:25.925228 dockerd[1916]: time="2026-01-20T07:01:25.925025149Z" level=info msg="Starting up" Jan 20 07:01:25.927049 dockerd[1916]: time="2026-01-20T07:01:25.927010280Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 07:01:25.997686 dockerd[1916]: time="2026-01-20T07:01:25.997508585Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 07:01:26.095524 dockerd[1916]: time="2026-01-20T07:01:26.095422224Z" level=info msg="Loading containers: start." Jan 20 07:01:26.111732 kernel: Initializing XFRM netlink socket Jan 20 07:01:26.228000 audit[1965]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.228000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff57846a50 a2=0 a3=0 items=0 ppid=1916 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 07:01:26.232000 audit[1967]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.232000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff1ab81d70 a2=0 a3=0 items=0 ppid=1916 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 07:01:26.235000 audit[1969]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.235000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff68caa790 a2=0 a3=0 items=0 ppid=1916 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.235000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 07:01:26.238000 audit[1971]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.238000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee1e9aac0 a2=0 a3=0 items=0 ppid=1916 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.238000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 07:01:26.242000 audit[1973]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.242000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffec05982d0 a2=0 a3=0 items=0 ppid=1916 
pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 07:01:26.246000 audit[1975]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.246000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdf603a8d0 a2=0 a3=0 items=0 ppid=1916 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.246000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 07:01:26.249000 audit[1977]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.249000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc1bd25770 a2=0 a3=0 items=0 ppid=1916 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.249000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 07:01:26.254000 audit[1979]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.254000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffffaee60a0 a2=0 a3=0 items=0 ppid=1916 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 07:01:26.295000 audit[1982]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.295000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc09d8d700 a2=0 a3=0 items=0 ppid=1916 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 07:01:26.299000 audit[1984]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.299000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffb8973ac0 a2=0 a3=0 items=0 ppid=1916 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 07:01:26.303000 audit[1986]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.303000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffa211a350 a2=0 a3=0 items=0 ppid=1916 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 07:01:26.306000 audit[1988]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.306000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd3c3b78b0 a2=0 a3=0 items=0 ppid=1916 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.306000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 07:01:26.310000 audit[1990]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.310000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdb980ce30 a2=0 a3=0 items=0 ppid=1916 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.310000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 07:01:26.389000 audit[2020]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.389000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdb2b022f0 a2=0 a3=0 items=0 ppid=1916 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.389000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 07:01:26.394000 audit[2022]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.394000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc4e087080 a2=0 a3=0 items=0 ppid=1916 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.394000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 07:01:26.398000 audit[2024]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.398000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffd99dc70 a2=0 a3=0 items=0 ppid=1916 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 07:01:26.401000 audit[2026]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.401000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0f02d3f0 a2=0 a3=0 items=0 ppid=1916 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.401000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 07:01:26.405000 audit[2028]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.405000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0c627240 a2=0 a3=0 items=0 ppid=1916 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 07:01:26.408000 audit[2030]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.408000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeb1f8fb60 a2=0 a3=0 items=0 ppid=1916 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.408000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 07:01:26.412000 audit[2032]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.412000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd38a81bd0 a2=0 a3=0 items=0 ppid=1916 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 07:01:26.415000 audit[2034]: NETFILTER_CFG table=nat:22 family=10 entries=2 
op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.415000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc92c6fa60 a2=0 a3=0 items=0 ppid=1916 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.415000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 07:01:26.419000 audit[2036]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.419000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fffd29d5670 a2=0 a3=0 items=0 ppid=1916 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.419000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 07:01:26.423000 audit[2038]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.423000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffccf5b0000 a2=0 a3=0 items=0 ppid=1916 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.423000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 07:01:26.426000 audit[2040]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.426000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffea77bb680 a2=0 a3=0 items=0 ppid=1916 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.426000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 07:01:26.429000 audit[2042]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.429000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffea366ea20 a2=0 a3=0 items=0 ppid=1916 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.429000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 07:01:26.432000 audit[2044]: NETFILTER_CFG table=filter:27 family=10 entries=1 
op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.432000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff4de1f670 a2=0 a3=0 items=0 ppid=1916 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.432000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 07:01:26.442000 audit[2049]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.442000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc2538f610 a2=0 a3=0 items=0 ppid=1916 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 07:01:26.446000 audit[2051]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.446000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffdfa3afd0 a2=0 a3=0 items=0 ppid=1916 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.446000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 07:01:26.449000 audit[2053]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.449000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff61e14540 a2=0 a3=0 items=0 ppid=1916 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 07:01:26.453000 audit[2055]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.453000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff815b8ca0 a2=0 a3=0 items=0 ppid=1916 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.453000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 07:01:26.457000 audit[2057]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.457000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe78bd9340 a2=0 a3=0 items=0 ppid=1916 pid=2057 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.457000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 07:01:26.460000 audit[2059]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:26.460000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffdbe839e0 a2=0 a3=0 items=0 ppid=1916 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.460000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 07:01:26.478072 systemd-timesyncd[1517]: Network configuration changed, trying to establish connection. Jan 20 07:01:26.495000 audit[2063]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.495000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd5837a220 a2=0 a3=0 items=0 ppid=1916 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.495000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 07:01:26.502000 audit[2065]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.502000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff5d82f2a0 a2=0 a3=0 items=0 ppid=1916 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.502000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 07:01:26.516000 audit[2073]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.516000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fffd17ef410 a2=0 a3=0 items=0 ppid=1916 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.516000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 07:01:26.532000 audit[2079]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.532000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc3e74bb70 a2=0 a3=0 items=0 ppid=1916 pid=2079 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.532000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 07:01:26.536000 audit[2081]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.536000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe238ff4c0 a2=0 a3=0 items=0 ppid=1916 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.536000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 07:01:26.540000 audit[2083]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.540000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdbf499330 a2=0 a3=0 items=0 ppid=1916 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.540000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 07:01:26.544000 audit[2085]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.544000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffebe65bb40 a2=0 a3=0 items=0 ppid=1916 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.544000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 07:01:26.547000 audit[2087]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:26.547000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe42f377e0 a2=0 a3=0 items=0 ppid=1916 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:26.547000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 07:01:26.549267 systemd-networkd[1498]: docker0: Link UP Jan 20 07:01:26.553895 dockerd[1916]: time="2026-01-20T07:01:26.553828833Z" level=info msg="Loading containers: done." 
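The proctitle= fields in the audit records above carry the full iptables/ip6tables command line, hex-encoded with NUL bytes separating the arguments. A short decoding sketch (not part of the log), applied to the record for pid 2083 above, recovers the Docker bridge rule being installed:

    # Minimal sketch: audit PROCTITLE records store the process command line
    # hex-encoded, with NUL bytes separating argv entries. The hex below is copied
    # from the pid 2083 record above.
    proctitle = (
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572"
        "002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552"
    )
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /usr/bin/iptables --wait -t filter -A DOCKER-BRIDGE -o docker0 -j DOCKER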
Jan 20 07:01:26.581010 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1186742482-merged.mount: Deactivated successfully. Jan 20 07:01:26.586459 dockerd[1916]: time="2026-01-20T07:01:26.586415459Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 07:01:26.587222 dockerd[1916]: time="2026-01-20T07:01:26.587190770Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 07:01:26.587451 dockerd[1916]: time="2026-01-20T07:01:26.587429440Z" level=info msg="Initializing buildkit" Jan 20 07:01:26.619194 dockerd[1916]: time="2026-01-20T07:01:26.619124086Z" level=info msg="Completed buildkit initialization" Jan 20 07:01:26.629740 dockerd[1916]: time="2026-01-20T07:01:26.629609331Z" level=info msg="Daemon has completed initialization" Jan 20 07:01:26.629937 dockerd[1916]: time="2026-01-20T07:01:26.629896491Z" level=info msg="API listen on /run/docker.sock" Jan 20 07:01:26.630374 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 07:01:26.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:27.739749 systemd-resolved[1282]: Clock change detected. Flushing caches. Jan 20 07:01:27.741305 systemd-timesyncd[1517]: Contacted time server [2600:3c06::f03c:94ff:fee2:9c28]:123 (2.flatcar.pool.ntp.org). Jan 20 07:01:27.741402 systemd-timesyncd[1517]: Initial clock synchronization to Tue 2026-01-20 07:01:27.738712 UTC. Jan 20 07:01:29.076808 containerd[1604]: time="2026-01-20T07:01:29.076639783Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 07:01:30.199075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount418066230.mount: Deactivated successfully. 
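Unit names like var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1186742482-merged.mount use systemd's path escaping: "/" becomes "-", and characters that would be ambiguous (here a literal dash) are escaped as \xNN. On the host, `systemd-escape --unescape --path` reverses this; the sketch below is a rough re-implementation, sufficient only for the unit names seen in this log:

    # Illustrative sketch: recover the mount point from a systemd mount unit name.
    import re

    def unit_to_path(unit_name: str) -> str:
        stem = unit_name.removesuffix(".mount")
        def unescape(part: str) -> str:
            # undo \xNN escapes (e.g. \x2d -> "-") inside a single path component
            return re.sub(r"\\x([0-9a-fA-F]{2})",
                          lambda m: chr(int(m.group(1), 16)), part)
        # "-" is systemd's path separator, which is why literal dashes were escaped
        return "/" + "/".join(unescape(p) for p in stem.split("-"))

    print(unit_to_path(r"var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1186742482-merged.mount"))
    # -> /var/lib/docker/overlay2/opaque-bug-check1186742482/merged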
Jan 20 07:01:32.978174 containerd[1604]: time="2026-01-20T07:01:32.977459922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:32.980685 containerd[1604]: time="2026-01-20T07:01:32.978815692Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Jan 20 07:01:32.980685 containerd[1604]: time="2026-01-20T07:01:32.980110363Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:32.982143 containerd[1604]: time="2026-01-20T07:01:32.982048434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:32.985210 containerd[1604]: time="2026-01-20T07:01:32.983051245Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 3.906235752s" Jan 20 07:01:32.985210 containerd[1604]: time="2026-01-20T07:01:32.983103895Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 20 07:01:32.987582 containerd[1604]: time="2026-01-20T07:01:32.987500957Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 07:01:35.783940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 07:01:35.795861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 07:01:36.368130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 07:01:36.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:36.370931 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 20 07:01:36.371141 kernel: audit: type=1130 audit(1768892496.368:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:36.400328 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 07:01:36.680784 kubelet[2199]: E0120 07:01:36.680592 2199 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 07:01:36.694955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 07:01:36.695218 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
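The failed kubelet start above (restart counter 2) and the identical one further down (counter 3) have the same cause: /var/lib/kubelet/config.yaml does not exist, so systemd keeps scheduling restarts. That file is normally written by kubeadm during init/join, which is general knowledge rather than something this log states. A minimal sketch of the failing precondition, with the path taken from the error message:

    # Hedged sketch: reproduce the kubelet's failing precondition.
    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")   # path taken from the error above
    if cfg.is_file():
        print(f"{cfg} present ({cfg.stat().st_size} bytes); kubelet can load its config")
    else:
        print(f"{cfg} missing; expected until kubeadm (or equivalent) bootstraps this node")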
Jan 20 07:01:36.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 07:01:36.703205 kernel: audit: type=1131 audit(1768892496.694:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 07:01:36.695904 systemd[1]: kubelet.service: Consumed 784ms CPU time, 108.4M memory peak. Jan 20 07:01:36.836941 containerd[1604]: time="2026-01-20T07:01:36.836782300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:36.838862 containerd[1604]: time="2026-01-20T07:01:36.838787541Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 20 07:01:36.841039 containerd[1604]: time="2026-01-20T07:01:36.839037341Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:36.845389 containerd[1604]: time="2026-01-20T07:01:36.845145094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:36.846322 containerd[1604]: time="2026-01-20T07:01:36.846283185Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 3.858727338s" Jan 20 07:01:36.846388 containerd[1604]: time="2026-01-20T07:01:36.846374595Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 20 07:01:36.850982 containerd[1604]: time="2026-01-20T07:01:36.850905907Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 07:01:39.480371 containerd[1604]: time="2026-01-20T07:01:39.479316440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:39.484043 containerd[1604]: time="2026-01-20T07:01:39.480855951Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 20 07:01:39.484638 containerd[1604]: time="2026-01-20T07:01:39.484321173Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:39.485953 containerd[1604]: time="2026-01-20T07:01:39.485895064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:39.487787 containerd[1604]: time="2026-01-20T07:01:39.487700455Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" 
with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.636702468s" Jan 20 07:01:39.487861 containerd[1604]: time="2026-01-20T07:01:39.487785125Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 20 07:01:39.489755 containerd[1604]: time="2026-01-20T07:01:39.489717306Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 07:01:39.722371 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 20 07:01:39.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:39.731290 kernel: audit: type=1131 audit(1768892499.722:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:39.768000 audit: BPF prog-id=63 op=UNLOAD Jan 20 07:01:39.772214 kernel: audit: type=1334 audit(1768892499.768:291): prog-id=63 op=UNLOAD Jan 20 07:01:41.990142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505468560.mount: Deactivated successfully. Jan 20 07:01:43.630888 containerd[1604]: time="2026-01-20T07:01:43.630699095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:43.632518 containerd[1604]: time="2026-01-20T07:01:43.632461396Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 20 07:01:43.633658 containerd[1604]: time="2026-01-20T07:01:43.633596376Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:43.637019 containerd[1604]: time="2026-01-20T07:01:43.635796727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:43.637019 containerd[1604]: time="2026-01-20T07:01:43.636428508Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 4.146664792s" Jan 20 07:01:43.637019 containerd[1604]: time="2026-01-20T07:01:43.636479948Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 20 07:01:43.639611 containerd[1604]: time="2026-01-20T07:01:43.639583079Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 07:01:44.640108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968539125.mount: Deactivated successfully. 
Jan 20 07:01:46.845611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 07:01:46.853253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 07:01:46.930719 containerd[1604]: time="2026-01-20T07:01:46.930389173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:46.935882 containerd[1604]: time="2026-01-20T07:01:46.935488766Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Jan 20 07:01:46.938151 containerd[1604]: time="2026-01-20T07:01:46.938076037Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:46.943363 containerd[1604]: time="2026-01-20T07:01:46.943303200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:46.946243 containerd[1604]: time="2026-01-20T07:01:46.946195081Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.306531342s" Jan 20 07:01:46.946526 containerd[1604]: time="2026-01-20T07:01:46.946272501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 20 07:01:46.951200 containerd[1604]: time="2026-01-20T07:01:46.950104363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 07:01:47.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:47.322485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 07:01:47.331212 kernel: audit: type=1130 audit(1768892507.321:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:47.341734 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 07:01:47.491648 kubelet[2281]: E0120 07:01:47.491568 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 07:01:47.496033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 07:01:47.496345 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 07:01:47.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 07:01:47.496956 systemd[1]: kubelet.service: Consumed 577ms CPU time, 108.2M memory peak. Jan 20 07:01:47.503210 kernel: audit: type=1131 audit(1768892507.495:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 07:01:47.766324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961537433.mount: Deactivated successfully. Jan 20 07:01:47.773033 containerd[1604]: time="2026-01-20T07:01:47.772947574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 07:01:47.775863 containerd[1604]: time="2026-01-20T07:01:47.774484105Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 07:01:47.775863 containerd[1604]: time="2026-01-20T07:01:47.774720565Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 07:01:47.776662 containerd[1604]: time="2026-01-20T07:01:47.776616036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 07:01:47.777549 containerd[1604]: time="2026-01-20T07:01:47.777310397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 827.167814ms" Jan 20 07:01:47.777667 containerd[1604]: time="2026-01-20T07:01:47.777647667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 07:01:47.778569 containerd[1604]: time="2026-01-20T07:01:47.778530577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 07:01:48.617413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036389645.mount: Deactivated successfully. Jan 20 07:01:52.461440 update_engine[1586]: I20260120 07:01:52.460919 1586 update_attempter.cc:509] Updating boot flags... 
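The dockerd and containerd entries in this log are logfmt-style key=value records (time=, level=, msg=, plus fields such as storage-driver). A small parser sketch that handles only the quoting conventions visible here, applied to one line copied from the log:

    # Sketch of a key=value (logfmt-style) parser for the dockerd/containerd lines.
    import re

    PAIR = re.compile(r'(\w[\w.-]*)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse(line: str) -> dict:
        # strip surrounding quotes and unescape embedded \" sequences
        return {k: v.strip('"').replace('\\"', '"') for k, v in PAIR.findall(line)}

    rec = parse('time="2026-01-20T07:01:26.587429440Z" level=info msg="Initializing buildkit"')
    print(rec["time"], rec["level"], rec["msg"])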
Jan 20 07:01:53.058918 containerd[1604]: time="2026-01-20T07:01:53.058725915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:53.061537 containerd[1604]: time="2026-01-20T07:01:53.061491517Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Jan 20 07:01:53.071244 containerd[1604]: time="2026-01-20T07:01:53.068983981Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:53.076268 containerd[1604]: time="2026-01-20T07:01:53.076226834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:01:53.078387 containerd[1604]: time="2026-01-20T07:01:53.078336765Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.299654188s" Jan 20 07:01:53.078627 containerd[1604]: time="2026-01-20T07:01:53.078584565Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 20 07:01:56.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:56.362464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 07:01:56.362932 systemd[1]: kubelet.service: Consumed 577ms CPU time, 108.2M memory peak. Jan 20 07:01:56.372053 kernel: audit: type=1130 audit(1768892516.361:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:56.370408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 07:01:56.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:56.380272 kernel: audit: type=1131 audit(1768892516.361:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:56.421374 systemd[1]: Reload requested from client PID 2394 ('systemctl') (unit session-10.scope)... Jan 20 07:01:56.421432 systemd[1]: Reloading... Jan 20 07:01:56.616221 zram_generator::config[2443]: No configuration found. Jan 20 07:01:56.921308 systemd[1]: Reloading finished in 499 ms. 
Jan 20 07:01:56.952811 kernel: audit: type=1334 audit(1768892516.944:296): prog-id=67 op=LOAD Jan 20 07:01:56.952938 kernel: audit: type=1334 audit(1768892516.944:297): prog-id=59 op=UNLOAD Jan 20 07:01:56.944000 audit: BPF prog-id=67 op=LOAD Jan 20 07:01:56.944000 audit: BPF prog-id=59 op=UNLOAD Jan 20 07:01:56.945000 audit: BPF prog-id=68 op=LOAD Jan 20 07:01:56.958198 kernel: audit: type=1334 audit(1768892516.945:298): prog-id=68 op=LOAD Jan 20 07:01:56.945000 audit: BPF prog-id=47 op=UNLOAD Jan 20 07:01:56.963202 kernel: audit: type=1334 audit(1768892516.945:299): prog-id=47 op=UNLOAD Jan 20 07:01:56.945000 audit: BPF prog-id=69 op=LOAD Jan 20 07:01:56.945000 audit: BPF prog-id=70 op=LOAD Jan 20 07:01:56.967305 kernel: audit: type=1334 audit(1768892516.945:300): prog-id=69 op=LOAD Jan 20 07:01:56.967357 kernel: audit: type=1334 audit(1768892516.945:301): prog-id=70 op=LOAD Jan 20 07:01:56.945000 audit: BPF prog-id=48 op=UNLOAD Jan 20 07:01:56.969681 kernel: audit: type=1334 audit(1768892516.945:302): prog-id=48 op=UNLOAD Jan 20 07:01:56.945000 audit: BPF prog-id=49 op=UNLOAD Jan 20 07:01:56.947000 audit: BPF prog-id=71 op=LOAD Jan 20 07:01:56.947000 audit: BPF prog-id=46 op=UNLOAD Jan 20 07:01:56.976195 kernel: audit: type=1334 audit(1768892516.945:303): prog-id=49 op=UNLOAD Jan 20 07:01:56.954000 audit: BPF prog-id=72 op=LOAD Jan 20 07:01:56.954000 audit: BPF prog-id=60 op=UNLOAD Jan 20 07:01:56.954000 audit: BPF prog-id=73 op=LOAD Jan 20 07:01:56.954000 audit: BPF prog-id=74 op=LOAD Jan 20 07:01:56.954000 audit: BPF prog-id=61 op=UNLOAD Jan 20 07:01:56.954000 audit: BPF prog-id=62 op=UNLOAD Jan 20 07:01:56.954000 audit: BPF prog-id=75 op=LOAD Jan 20 07:01:56.954000 audit: BPF prog-id=76 op=LOAD Jan 20 07:01:56.954000 audit: BPF prog-id=56 op=UNLOAD Jan 20 07:01:56.954000 audit: BPF prog-id=57 op=UNLOAD Jan 20 07:01:56.957000 audit: BPF prog-id=77 op=LOAD Jan 20 07:01:56.957000 audit: BPF prog-id=53 op=UNLOAD Jan 20 07:01:56.959000 audit: BPF prog-id=78 op=LOAD Jan 20 07:01:56.959000 audit: BPF prog-id=79 op=LOAD Jan 20 07:01:56.959000 audit: BPF prog-id=54 op=UNLOAD Jan 20 07:01:56.959000 audit: BPF prog-id=55 op=UNLOAD Jan 20 07:01:56.959000 audit: BPF prog-id=80 op=LOAD Jan 20 07:01:56.959000 audit: BPF prog-id=43 op=UNLOAD Jan 20 07:01:56.959000 audit: BPF prog-id=81 op=LOAD Jan 20 07:01:56.959000 audit: BPF prog-id=82 op=LOAD Jan 20 07:01:56.959000 audit: BPF prog-id=44 op=UNLOAD Jan 20 07:01:56.959000 audit: BPF prog-id=45 op=UNLOAD Jan 20 07:01:56.962000 audit: BPF prog-id=83 op=LOAD Jan 20 07:01:56.962000 audit: BPF prog-id=50 op=UNLOAD Jan 20 07:01:56.962000 audit: BPF prog-id=84 op=LOAD Jan 20 07:01:56.962000 audit: BPF prog-id=85 op=LOAD Jan 20 07:01:56.962000 audit: BPF prog-id=51 op=UNLOAD Jan 20 07:01:56.962000 audit: BPF prog-id=52 op=UNLOAD Jan 20 07:01:56.973000 audit: BPF prog-id=86 op=LOAD Jan 20 07:01:56.973000 audit: BPF prog-id=58 op=UNLOAD Jan 20 07:01:56.976000 audit: BPF prog-id=87 op=LOAD Jan 20 07:01:56.976000 audit: BPF prog-id=66 op=UNLOAD Jan 20 07:01:56.999372 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 07:01:56.999758 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 07:01:57.000396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 07:01:57.000490 systemd[1]: kubelet.service: Consumed 358ms CPU time, 98.5M memory peak. 
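The burst of "audit: BPF prog-id=… op=LOAD/UNLOAD" records brackets the systemd reload above; the paired UNLOAD/LOAD IDs suggest systemd detaching and re-attaching the BPF programs it uses for per-unit cgroup filtering, though the log itself does not say which units they belong to. A small counting sketch, assuming journal text is piped in on stdin (the script name is made up):

    # Sketch: tally BPF prog-id LOAD/UNLOAD audit records from journal text, e.g.
    #   journalctl -k | python3 tally_bpf.py      # tally_bpf.py is a hypothetical name
    import re
    import sys
    from collections import Counter

    ops = Counter(m.group(1) for m in
                  re.finditer(r"BPF prog-id=\d+ op=(LOAD|UNLOAD)", sys.stdin.read()))
    print(dict(ops))   # LOAD vs UNLOAD counts for the reload burst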
Jan 20 07:01:56.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 07:01:57.003937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 07:01:57.261420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 07:01:57.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:01:57.273682 (kubelet)[2495]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 07:01:57.362425 kubelet[2495]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 07:01:57.362425 kubelet[2495]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 07:01:57.362425 kubelet[2495]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 07:01:57.362937 kubelet[2495]: I0120 07:01:57.362540 2495 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 07:01:57.867444 kubelet[2495]: I0120 07:01:57.867366 2495 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 07:01:57.867444 kubelet[2495]: I0120 07:01:57.867441 2495 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 07:01:57.868166 kubelet[2495]: I0120 07:01:57.868007 2495 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 07:01:57.904870 kubelet[2495]: E0120 07:01:57.904813 2495 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.232.7.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.7.121:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 07:01:57.908214 kubelet[2495]: I0120 07:01:57.908058 2495 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 07:01:57.923616 kubelet[2495]: I0120 07:01:57.923573 2495 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 07:01:57.936707 kubelet[2495]: I0120 07:01:57.936647 2495 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 07:01:57.937352 kubelet[2495]: I0120 07:01:57.937282 2495 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 07:01:57.937922 kubelet[2495]: I0120 07:01:57.937336 2495 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-7-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 07:01:57.938275 kubelet[2495]: I0120 07:01:57.937953 2495 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 07:01:57.938275 kubelet[2495]: I0120 07:01:57.937977 2495 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 07:01:57.938393 kubelet[2495]: I0120 07:01:57.938333 2495 state_mem.go:36] "Initialized new in-memory state store" Jan 20 07:01:57.941090 kubelet[2495]: I0120 07:01:57.941064 2495 kubelet.go:480] "Attempting to sync node with API server" Jan 20 07:01:57.941090 kubelet[2495]: I0120 07:01:57.941097 2495 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 07:01:57.941239 kubelet[2495]: I0120 07:01:57.941155 2495 kubelet.go:386] "Adding apiserver pod source" Jan 20 07:01:57.941239 kubelet[2495]: I0120 07:01:57.941212 2495 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 07:01:57.948338 kubelet[2495]: E0120 07:01:57.948157 2495 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.7.121:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-7-121&limit=500&resourceVersion=0\": dial tcp 172.232.7.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 07:01:57.948338 kubelet[2495]: E0120 07:01:57.948274 2495 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.7.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.7.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" 
Jan 20 07:01:57.948827 kubelet[2495]: I0120 07:01:57.948805 2495 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 07:01:57.949346 kubelet[2495]: I0120 07:01:57.949322 2495 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 07:01:57.950296 kubelet[2495]: W0120 07:01:57.950270 2495 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 07:01:57.979337 kubelet[2495]: I0120 07:01:57.979299 2495 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 07:01:57.979479 kubelet[2495]: I0120 07:01:57.979400 2495 server.go:1289] "Started kubelet" Jan 20 07:01:57.985211 kubelet[2495]: I0120 07:01:57.984412 2495 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 07:01:57.985480 kubelet[2495]: E0120 07:01:57.984006 2495 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.7.121:6443/api/v1/namespaces/default/events\": dial tcp 172.232.7.121:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-7-121.188c5e5f84011e12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-7-121,UID:172-232-7-121,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-7-121,},FirstTimestamp:2026-01-20 07:01:57.979332114 +0000 UTC m=+0.689066265,LastTimestamp:2026-01-20 07:01:57.979332114 +0000 UTC m=+0.689066265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-7-121,}" Jan 20 07:01:57.986742 kubelet[2495]: I0120 07:01:57.986713 2495 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 07:01:57.986986 kubelet[2495]: E0120 07:01:57.986962 2495 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-7-121\" not found" Jan 20 07:01:57.987604 kubelet[2495]: I0120 07:01:57.987580 2495 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 07:01:57.987834 kubelet[2495]: I0120 07:01:57.987815 2495 reconciler.go:26] "Reconciler: start to sync state" Jan 20 07:01:57.988722 kubelet[2495]: E0120 07:01:57.988699 2495 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.7.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.7.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 07:01:57.994213 kubelet[2495]: I0120 07:01:57.991301 2495 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 07:01:57.995150 kubelet[2495]: E0120 07:01:57.995100 2495 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-121?timeout=10s\": dial tcp 172.232.7.121:6443: connect: connection refused" interval="200ms" Jan 20 07:01:57.995926 kubelet[2495]: I0120 07:01:57.995787 2495 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 07:01:57.996633 kubelet[2495]: I0120 07:01:57.996608 2495 
factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 07:01:57.999000 audit[2511]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:57.999000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff985bd7d0 a2=0 a3=0 items=0 ppid=2495 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:57.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 07:01:58.001000 audit[2512]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:58.001000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0949b460 a2=0 a3=0 items=0 ppid=2495 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 07:01:58.005407 kubelet[2495]: I0120 07:01:58.005385 2495 factory.go:223] Registration of the containerd container factory successfully Jan 20 07:01:58.006206 kubelet[2495]: I0120 07:01:58.005893 2495 factory.go:223] Registration of the systemd container factory successfully Jan 20 07:01:58.006515 kubelet[2495]: I0120 07:01:58.006474 2495 server.go:317] "Adding debug handlers to kubelet server" Jan 20 07:01:58.008000 audit[2514]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:58.008000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcca87ff90 a2=0 a3=0 items=0 ppid=2495 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 07:01:58.012000 audit[2516]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:58.012000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc5ab3cbe0 a2=0 a3=0 items=0 ppid=2495 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 07:01:58.017204 kubelet[2495]: I0120 07:01:57.991369 2495 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 07:01:58.017391 kubelet[2495]: I0120 07:01:58.017361 2495 server.go:255] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 07:01:58.022000 audit[2519]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:58.022000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff013cab40 a2=0 a3=0 items=0 ppid=2495 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 07:01:58.024488 kubelet[2495]: I0120 07:01:58.024388 2495 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 07:01:58.024000 audit[2520]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:58.024000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffd80422c0 a2=0 a3=0 items=0 ppid=2495 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.024000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 07:01:58.026282 kubelet[2495]: I0120 07:01:58.026108 2495 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 07:01:58.026282 kubelet[2495]: I0120 07:01:58.026151 2495 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 07:01:58.026282 kubelet[2495]: I0120 07:01:58.026217 2495 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 07:01:58.026282 kubelet[2495]: I0120 07:01:58.026236 2495 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 07:01:58.026424 kubelet[2495]: E0120 07:01:58.026316 2495 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 07:01:58.026000 audit[2521]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:58.026000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdafb2fa80 a2=0 a3=0 items=0 ppid=2495 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.026000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 07:01:58.028000 audit[2522]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:58.028000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8f959520 a2=0 a3=0 items=0 ppid=2495 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.028000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 07:01:58.029000 audit[2523]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:01:58.029000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe07eeb560 a2=0 a3=0 items=0 ppid=2495 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.029000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 07:01:58.031000 audit[2525]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:58.031000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff4570d30 a2=0 a3=0 items=0 ppid=2495 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 07:01:58.032000 audit[2526]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:58.032000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea6fe2740 a2=0 a3=0 items=0 ppid=2495 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
07:01:58.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 07:01:58.035000 audit[2527]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:01:58.035000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7ce5aec0 a2=0 a3=0 items=0 ppid=2495 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.035000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 07:01:58.037541 kubelet[2495]: E0120 07:01:58.037119 2495 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 07:01:58.037541 kubelet[2495]: E0120 07:01:58.037309 2495 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.7.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.7.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 07:01:58.047681 kubelet[2495]: I0120 07:01:58.047615 2495 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 07:01:58.047681 kubelet[2495]: I0120 07:01:58.047655 2495 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 07:01:58.047681 kubelet[2495]: I0120 07:01:58.047677 2495 state_mem.go:36] "Initialized new in-memory state store" Jan 20 07:01:58.049943 kubelet[2495]: I0120 07:01:58.049906 2495 policy_none.go:49] "None policy: Start" Jan 20 07:01:58.050014 kubelet[2495]: I0120 07:01:58.049969 2495 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 07:01:58.050051 kubelet[2495]: I0120 07:01:58.050019 2495 state_mem.go:35] "Initializing new in-memory state store" Jan 20 07:01:58.060112 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 07:01:58.079012 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 07:01:58.085001 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 07:01:58.087833 kubelet[2495]: E0120 07:01:58.087783 2495 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-7-121\" not found" Jan 20 07:01:58.114905 kubelet[2495]: E0120 07:01:58.114848 2495 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 07:01:58.115170 kubelet[2495]: I0120 07:01:58.115141 2495 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 07:01:58.115265 kubelet[2495]: I0120 07:01:58.115201 2495 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 07:01:58.116476 kubelet[2495]: I0120 07:01:58.116030 2495 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 07:01:58.121470 kubelet[2495]: E0120 07:01:58.119641 2495 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 07:01:58.121470 kubelet[2495]: E0120 07:01:58.119737 2495 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-7-121\" not found" Jan 20 07:01:58.143363 systemd[1]: Created slice kubepods-burstable-pod9b9343c42e5315bb566658a04dd3da6a.slice - libcontainer container kubepods-burstable-pod9b9343c42e5315bb566658a04dd3da6a.slice. Jan 20 07:01:58.153951 kubelet[2495]: E0120 07:01:58.153504 2495 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-121\" not found" node="172-232-7-121" Jan 20 07:01:58.157768 systemd[1]: Created slice kubepods-burstable-podf910d39d41830cb164800cf2795b1e09.slice - libcontainer container kubepods-burstable-podf910d39d41830cb164800cf2795b1e09.slice. Jan 20 07:01:58.171460 kubelet[2495]: E0120 07:01:58.171415 2495 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-121\" not found" node="172-232-7-121" Jan 20 07:01:58.175261 systemd[1]: Created slice kubepods-burstable-pod45e5a40cddacbca27ac5f0615d062e27.slice - libcontainer container kubepods-burstable-pod45e5a40cddacbca27ac5f0615d062e27.slice. Jan 20 07:01:58.179200 kubelet[2495]: E0120 07:01:58.179155 2495 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-121\" not found" node="172-232-7-121" Jan 20 07:01:58.189420 kubelet[2495]: I0120 07:01:58.189374 2495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f910d39d41830cb164800cf2795b1e09-kubeconfig\") pod \"kube-controller-manager-172-232-7-121\" (UID: \"f910d39d41830cb164800cf2795b1e09\") " pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:01:58.189420 kubelet[2495]: I0120 07:01:58.189419 2495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f910d39d41830cb164800cf2795b1e09-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-7-121\" (UID: \"f910d39d41830cb164800cf2795b1e09\") " pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:01:58.189420 kubelet[2495]: I0120 07:01:58.189629 2495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b9343c42e5315bb566658a04dd3da6a-ca-certs\") pod \"kube-apiserver-172-232-7-121\" (UID: \"9b9343c42e5315bb566658a04dd3da6a\") " pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:01:58.189420 kubelet[2495]: I0120 07:01:58.189719 2495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f910d39d41830cb164800cf2795b1e09-flexvolume-dir\") pod \"kube-controller-manager-172-232-7-121\" (UID: \"f910d39d41830cb164800cf2795b1e09\") " pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:01:58.189420 kubelet[2495]: I0120 07:01:58.189747 2495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f910d39d41830cb164800cf2795b1e09-k8s-certs\") pod \"kube-controller-manager-172-232-7-121\" (UID: \"f910d39d41830cb164800cf2795b1e09\") " 
pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:01:58.190106 kubelet[2495]: I0120 07:01:58.189766 2495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45e5a40cddacbca27ac5f0615d062e27-kubeconfig\") pod \"kube-scheduler-172-232-7-121\" (UID: \"45e5a40cddacbca27ac5f0615d062e27\") " pod="kube-system/kube-scheduler-172-232-7-121" Jan 20 07:01:58.190106 kubelet[2495]: I0120 07:01:58.189794 2495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b9343c42e5315bb566658a04dd3da6a-k8s-certs\") pod \"kube-apiserver-172-232-7-121\" (UID: \"9b9343c42e5315bb566658a04dd3da6a\") " pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:01:58.190106 kubelet[2495]: I0120 07:01:58.189813 2495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b9343c42e5315bb566658a04dd3da6a-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-7-121\" (UID: \"9b9343c42e5315bb566658a04dd3da6a\") " pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:01:58.190106 kubelet[2495]: I0120 07:01:58.189829 2495 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f910d39d41830cb164800cf2795b1e09-ca-certs\") pod \"kube-controller-manager-172-232-7-121\" (UID: \"f910d39d41830cb164800cf2795b1e09\") " pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:01:58.196055 kubelet[2495]: E0120 07:01:58.196010 2495 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-121?timeout=10s\": dial tcp 172.232.7.121:6443: connect: connection refused" interval="400ms" Jan 20 07:01:58.220227 kubelet[2495]: I0120 07:01:58.220022 2495 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-121" Jan 20 07:01:58.220515 kubelet[2495]: E0120 07:01:58.220479 2495 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.7.121:6443/api/v1/nodes\": dial tcp 172.232.7.121:6443: connect: connection refused" node="172-232-7-121" Jan 20 07:01:58.423366 kubelet[2495]: I0120 07:01:58.423260 2495 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-121" Jan 20 07:01:58.450098 kubelet[2495]: E0120 07:01:58.450040 2495 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.7.121:6443/api/v1/nodes\": dial tcp 172.232.7.121:6443: connect: connection refused" node="172-232-7-121" Jan 20 07:01:58.454736 kubelet[2495]: E0120 07:01:58.454712 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:01:58.456206 containerd[1604]: time="2026-01-20T07:01:58.455978012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-7-121,Uid:9b9343c42e5315bb566658a04dd3da6a,Namespace:kube-system,Attempt:0,}" Jan 20 07:01:58.476232 kubelet[2495]: E0120 07:01:58.474396 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 
172.232.0.13" Jan 20 07:01:58.476696 containerd[1604]: time="2026-01-20T07:01:58.476645443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-7-121,Uid:f910d39d41830cb164800cf2795b1e09,Namespace:kube-system,Attempt:0,}" Jan 20 07:01:58.480892 kubelet[2495]: E0120 07:01:58.480424 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:01:58.481601 containerd[1604]: time="2026-01-20T07:01:58.481557795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-7-121,Uid:45e5a40cddacbca27ac5f0615d062e27,Namespace:kube-system,Attempt:0,}" Jan 20 07:01:58.597758 kubelet[2495]: E0120 07:01:58.597679 2495 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-121?timeout=10s\": dial tcp 172.232.7.121:6443: connect: connection refused" interval="800ms" Jan 20 07:01:58.621689 containerd[1604]: time="2026-01-20T07:01:58.621329565Z" level=info msg="connecting to shim 6ec57586365ecf370867b66a72cb485a89519189ba6819263534edb13737e80a" address="unix:///run/containerd/s/19085507c2e9d519d4a009b008c1427439d96ffa0a83233c8ae4f86e0e9b015c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:01:58.623238 containerd[1604]: time="2026-01-20T07:01:58.623125666Z" level=info msg="connecting to shim f0d57c4d18e20504a7da0d72b628feb3929884645702839a099a43477a2719b8" address="unix:///run/containerd/s/187730933c4a76f8ee58d36c11e69c9a23a70da84094b1c0fa7640d8f0cdb5b8" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:01:58.647226 containerd[1604]: time="2026-01-20T07:01:58.647111948Z" level=info msg="connecting to shim 04979801af3609330cc8e66cf78a05ac8abb7d357e493e8c0dcdb13eb66225b4" address="unix:///run/containerd/s/7df22e2f9e800da5ecea605f56562a3b1fa70a04dbecb8ede325c73d2d40e004" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:01:58.823203 kubelet[2495]: E0120 07:01:58.822389 2495 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.7.121:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-7-121&limit=500&resourceVersion=0\": dial tcp 172.232.7.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 07:01:58.823203 kubelet[2495]: E0120 07:01:58.822474 2495 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.7.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.7.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 07:01:58.853000 kubelet[2495]: I0120 07:01:58.852661 2495 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-121" Jan 20 07:01:58.853000 kubelet[2495]: E0120 07:01:58.852967 2495 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.7.121:6443/api/v1/nodes\": dial tcp 172.232.7.121:6443: connect: connection refused" node="172-232-7-121" Jan 20 07:01:58.870490 systemd[1]: Started cri-containerd-04979801af3609330cc8e66cf78a05ac8abb7d357e493e8c0dcdb13eb66225b4.scope - libcontainer container 04979801af3609330cc8e66cf78a05ac8abb7d357e493e8c0dcdb13eb66225b4. 
Jan 20 07:01:58.925608 systemd[1]: Started cri-containerd-6ec57586365ecf370867b66a72cb485a89519189ba6819263534edb13737e80a.scope - libcontainer container 6ec57586365ecf370867b66a72cb485a89519189ba6819263534edb13737e80a. Jan 20 07:01:58.960489 systemd[1]: Started cri-containerd-f0d57c4d18e20504a7da0d72b628feb3929884645702839a099a43477a2719b8.scope - libcontainer container f0d57c4d18e20504a7da0d72b628feb3929884645702839a099a43477a2719b8. Jan 20 07:01:58.991000 audit: BPF prog-id=88 op=LOAD Jan 20 07:01:58.992000 audit: BPF prog-id=89 op=LOAD Jan 20 07:01:58.992000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2569 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034393739383031616633363039333330636338653636636637386130 Jan 20 07:01:58.992000 audit: BPF prog-id=89 op=UNLOAD Jan 20 07:01:58.992000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034393739383031616633363039333330636338653636636637386130 Jan 20 07:01:58.993000 audit: BPF prog-id=90 op=LOAD Jan 20 07:01:58.993000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2569 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034393739383031616633363039333330636338653636636637386130 Jan 20 07:01:58.993000 audit: BPF prog-id=91 op=LOAD Jan 20 07:01:58.993000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2569 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034393739383031616633363039333330636338653636636637386130 Jan 20 07:01:58.993000 audit: BPF prog-id=91 op=UNLOAD Jan 20 07:01:58.993000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034393739383031616633363039333330636338653636636637386130 Jan 20 07:01:58.993000 audit: BPF prog-id=90 op=UNLOAD Jan 20 07:01:58.993000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034393739383031616633363039333330636338653636636637386130 Jan 20 07:01:58.993000 audit: BPF prog-id=92 op=LOAD Jan 20 07:01:58.993000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2569 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:58.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034393739383031616633363039333330636338653636636637386130 Jan 20 07:01:59.006000 audit: BPF prog-id=93 op=LOAD Jan 20 07:01:59.008000 audit: BPF prog-id=94 op=LOAD Jan 20 07:01:59.008000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2556 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633537353836333635656366333730383637623636613732636234 Jan 20 07:01:59.008000 audit: BPF prog-id=94 op=UNLOAD Jan 20 07:01:59.008000 audit[2587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633537353836333635656366333730383637623636613732636234 Jan 20 07:01:59.008000 audit: BPF prog-id=95 op=LOAD Jan 20 07:01:59.008000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2556 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.008000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633537353836333635656366333730383637623636613732636234 Jan 20 07:01:59.008000 audit: BPF prog-id=96 op=LOAD Jan 20 07:01:59.008000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2556 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633537353836333635656366333730383637623636613732636234 Jan 20 07:01:59.009000 audit: BPF prog-id=96 op=UNLOAD Jan 20 07:01:59.009000 audit[2587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633537353836333635656366333730383637623636613732636234 Jan 20 07:01:59.009000 audit: BPF prog-id=95 op=UNLOAD Jan 20 07:01:59.009000 audit[2587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633537353836333635656366333730383637623636613732636234 Jan 20 07:01:59.009000 audit: BPF prog-id=97 op=LOAD Jan 20 07:01:59.009000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2556 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633537353836333635656366333730383637623636613732636234 Jan 20 07:01:59.090000 audit: BPF prog-id=98 op=LOAD Jan 20 07:01:59.094000 audit: BPF prog-id=99 op=LOAD Jan 20 07:01:59.094000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2546 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.094000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643537633464313865323035303461376461306437326236323866 Jan 20 07:01:59.094000 audit: BPF prog-id=99 op=UNLOAD Jan 20 07:01:59.094000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643537633464313865323035303461376461306437326236323866 Jan 20 07:01:59.097000 audit: BPF prog-id=100 op=LOAD Jan 20 07:01:59.097000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2546 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643537633464313865323035303461376461306437326236323866 Jan 20 07:01:59.097000 audit: BPF prog-id=101 op=LOAD Jan 20 07:01:59.097000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2546 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643537633464313865323035303461376461306437326236323866 Jan 20 07:01:59.097000 audit: BPF prog-id=101 op=UNLOAD Jan 20 07:01:59.097000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643537633464313865323035303461376461306437326236323866 Jan 20 07:01:59.097000 audit: BPF prog-id=100 op=UNLOAD Jan 20 07:01:59.097000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.097000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643537633464313865323035303461376461306437326236323866 Jan 20 07:01:59.097000 audit: BPF prog-id=102 op=LOAD Jan 20 07:01:59.097000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2546 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630643537633464313865323035303461376461306437326236323866 Jan 20 07:01:59.183373 kubelet[2495]: E0120 07:01:59.183299 2495 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.7.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.7.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 07:01:59.186482 kubelet[2495]: E0120 07:01:59.186423 2495 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.7.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.7.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 07:01:59.205794 containerd[1604]: time="2026-01-20T07:01:59.205735757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-7-121,Uid:9b9343c42e5315bb566658a04dd3da6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0d57c4d18e20504a7da0d72b628feb3929884645702839a099a43477a2719b8\"" Jan 20 07:01:59.208836 kubelet[2495]: E0120 07:01:59.208779 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:01:59.215877 containerd[1604]: time="2026-01-20T07:01:59.215809862Z" level=info msg="CreateContainer within sandbox \"f0d57c4d18e20504a7da0d72b628feb3929884645702839a099a43477a2719b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 07:01:59.218553 containerd[1604]: time="2026-01-20T07:01:59.218422833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-7-121,Uid:f910d39d41830cb164800cf2795b1e09,Namespace:kube-system,Attempt:0,} returns sandbox id \"04979801af3609330cc8e66cf78a05ac8abb7d357e493e8c0dcdb13eb66225b4\"" Jan 20 07:01:59.220112 kubelet[2495]: E0120 07:01:59.220076 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:01:59.224857 containerd[1604]: time="2026-01-20T07:01:59.224824976Z" level=info msg="CreateContainer within sandbox \"04979801af3609330cc8e66cf78a05ac8abb7d357e493e8c0dcdb13eb66225b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 07:01:59.253278 containerd[1604]: time="2026-01-20T07:01:59.253218721Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-172-232-7-121,Uid:45e5a40cddacbca27ac5f0615d062e27,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ec57586365ecf370867b66a72cb485a89519189ba6819263534edb13737e80a\"" Jan 20 07:01:59.255850 kubelet[2495]: E0120 07:01:59.255719 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:01:59.257144 containerd[1604]: time="2026-01-20T07:01:59.257093102Z" level=info msg="Container 0abb3dc53489a37b1a939314456e7bdde9727e45d750d0bf38f51d8d9298fa85: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:01:59.257392 containerd[1604]: time="2026-01-20T07:01:59.257367233Z" level=info msg="Container 4ba99115f1df5a7158afca147ac311e85d6f3567672ae55b05721488915b9bac: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:01:59.264982 containerd[1604]: time="2026-01-20T07:01:59.264932516Z" level=info msg="CreateContainer within sandbox \"6ec57586365ecf370867b66a72cb485a89519189ba6819263534edb13737e80a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 07:01:59.287860 containerd[1604]: time="2026-01-20T07:01:59.287784108Z" level=info msg="CreateContainer within sandbox \"04979801af3609330cc8e66cf78a05ac8abb7d357e493e8c0dcdb13eb66225b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0abb3dc53489a37b1a939314456e7bdde9727e45d750d0bf38f51d8d9298fa85\"" Jan 20 07:01:59.289759 containerd[1604]: time="2026-01-20T07:01:59.289531989Z" level=info msg="CreateContainer within sandbox \"f0d57c4d18e20504a7da0d72b628feb3929884645702839a099a43477a2719b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4ba99115f1df5a7158afca147ac311e85d6f3567672ae55b05721488915b9bac\"" Jan 20 07:01:59.290299 containerd[1604]: time="2026-01-20T07:01:59.290261989Z" level=info msg="StartContainer for \"0abb3dc53489a37b1a939314456e7bdde9727e45d750d0bf38f51d8d9298fa85\"" Jan 20 07:01:59.291491 containerd[1604]: time="2026-01-20T07:01:59.291426390Z" level=info msg="StartContainer for \"4ba99115f1df5a7158afca147ac311e85d6f3567672ae55b05721488915b9bac\"" Jan 20 07:01:59.294939 containerd[1604]: time="2026-01-20T07:01:59.294773721Z" level=info msg="connecting to shim 4ba99115f1df5a7158afca147ac311e85d6f3567672ae55b05721488915b9bac" address="unix:///run/containerd/s/187730933c4a76f8ee58d36c11e69c9a23a70da84094b1c0fa7640d8f0cdb5b8" protocol=ttrpc version=3 Jan 20 07:01:59.295248 containerd[1604]: time="2026-01-20T07:01:59.295204771Z" level=info msg="Container a76e591c5105ca409e6cbd4b13538031e8d2d7793985316e148607b2ddacb716: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:01:59.297121 containerd[1604]: time="2026-01-20T07:01:59.297084432Z" level=info msg="connecting to shim 0abb3dc53489a37b1a939314456e7bdde9727e45d750d0bf38f51d8d9298fa85" address="unix:///run/containerd/s/7df22e2f9e800da5ecea605f56562a3b1fa70a04dbecb8ede325c73d2d40e004" protocol=ttrpc version=3 Jan 20 07:01:59.306568 containerd[1604]: time="2026-01-20T07:01:59.306528017Z" level=info msg="CreateContainer within sandbox \"6ec57586365ecf370867b66a72cb485a89519189ba6819263534edb13737e80a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a76e591c5105ca409e6cbd4b13538031e8d2d7793985316e148607b2ddacb716\"" Jan 20 07:01:59.312291 containerd[1604]: time="2026-01-20T07:01:59.312259570Z" level=info msg="StartContainer for 
\"a76e591c5105ca409e6cbd4b13538031e8d2d7793985316e148607b2ddacb716\"" Jan 20 07:01:59.314029 containerd[1604]: time="2026-01-20T07:01:59.314003711Z" level=info msg="connecting to shim a76e591c5105ca409e6cbd4b13538031e8d2d7793985316e148607b2ddacb716" address="unix:///run/containerd/s/19085507c2e9d519d4a009b008c1427439d96ffa0a83233c8ae4f86e0e9b015c" protocol=ttrpc version=3 Jan 20 07:01:59.337428 systemd[1]: Started cri-containerd-4ba99115f1df5a7158afca147ac311e85d6f3567672ae55b05721488915b9bac.scope - libcontainer container 4ba99115f1df5a7158afca147ac311e85d6f3567672ae55b05721488915b9bac. Jan 20 07:01:59.347648 systemd[1]: Started cri-containerd-0abb3dc53489a37b1a939314456e7bdde9727e45d750d0bf38f51d8d9298fa85.scope - libcontainer container 0abb3dc53489a37b1a939314456e7bdde9727e45d750d0bf38f51d8d9298fa85. Jan 20 07:01:59.361487 systemd[1]: Started cri-containerd-a76e591c5105ca409e6cbd4b13538031e8d2d7793985316e148607b2ddacb716.scope - libcontainer container a76e591c5105ca409e6cbd4b13538031e8d2d7793985316e148607b2ddacb716. Jan 20 07:01:59.387000 audit: BPF prog-id=103 op=LOAD Jan 20 07:01:59.387000 audit: BPF prog-id=104 op=LOAD Jan 20 07:01:59.387000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2556 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366535393163353130356361343039653663626434623133353338 Jan 20 07:01:59.387000 audit: BPF prog-id=104 op=UNLOAD Jan 20 07:01:59.387000 audit[2680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366535393163353130356361343039653663626434623133353338 Jan 20 07:01:59.387000 audit: BPF prog-id=105 op=LOAD Jan 20 07:01:59.387000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2556 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366535393163353130356361343039653663626434623133353338 Jan 20 07:01:59.387000 audit: BPF prog-id=106 op=LOAD Jan 20 07:01:59.387000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2556 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.387000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366535393163353130356361343039653663626434623133353338 Jan 20 07:01:59.387000 audit: BPF prog-id=106 op=UNLOAD Jan 20 07:01:59.387000 audit[2680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366535393163353130356361343039653663626434623133353338 Jan 20 07:01:59.387000 audit: BPF prog-id=105 op=UNLOAD Jan 20 07:01:59.387000 audit[2680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366535393163353130356361343039653663626434623133353338 Jan 20 07:01:59.387000 audit: BPF prog-id=107 op=LOAD Jan 20 07:01:59.387000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2556 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366535393163353130356361343039653663626434623133353338 Jan 20 07:01:59.415581 kubelet[2495]: E0120 07:01:59.415540 2495 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-121?timeout=10s\": dial tcp 172.232.7.121:6443: connect: connection refused" interval="1.6s" Jan 20 07:01:59.419000 audit: BPF prog-id=108 op=LOAD Jan 20 07:01:59.420000 audit: BPF prog-id=109 op=LOAD Jan 20 07:01:59.420000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2546 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462613939313135663164663561373135386166636131343761633331 Jan 20 07:01:59.420000 audit: BPF prog-id=109 op=UNLOAD Jan 20 07:01:59.420000 audit[2668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2546 
pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462613939313135663164663561373135386166636131343761633331 Jan 20 07:01:59.420000 audit: BPF prog-id=110 op=LOAD Jan 20 07:01:59.420000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2546 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462613939313135663164663561373135386166636131343761633331 Jan 20 07:01:59.421000 audit: BPF prog-id=111 op=LOAD Jan 20 07:01:59.421000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2546 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462613939313135663164663561373135386166636131343761633331 Jan 20 07:01:59.421000 audit: BPF prog-id=111 op=UNLOAD Jan 20 07:01:59.421000 audit[2668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462613939313135663164663561373135386166636131343761633331 Jan 20 07:01:59.421000 audit: BPF prog-id=110 op=UNLOAD Jan 20 07:01:59.421000 audit[2668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2546 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462613939313135663164663561373135386166636131343761633331 Jan 20 07:01:59.421000 audit: BPF prog-id=112 op=LOAD Jan 20 07:01:59.421000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2546 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462613939313135663164663561373135386166636131343761633331 Jan 20 07:01:59.525000 audit: BPF prog-id=113 op=LOAD Jan 20 07:01:59.526000 audit: BPF prog-id=114 op=LOAD Jan 20 07:01:59.526000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2569 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626233646335333438396133376231613933393331343435366537 Jan 20 07:01:59.526000 audit: BPF prog-id=114 op=UNLOAD Jan 20 07:01:59.526000 audit[2669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626233646335333438396133376231613933393331343435366537 Jan 20 07:01:59.527000 audit: BPF prog-id=115 op=LOAD Jan 20 07:01:59.527000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2569 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626233646335333438396133376231613933393331343435366537 Jan 20 07:01:59.527000 audit: BPF prog-id=116 op=LOAD Jan 20 07:01:59.527000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2569 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626233646335333438396133376231613933393331343435366537 Jan 20 07:01:59.527000 audit: BPF prog-id=116 op=UNLOAD Jan 20 07:01:59.527000 audit[2669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.527000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626233646335333438396133376231613933393331343435366537 Jan 20 07:01:59.527000 audit: BPF prog-id=115 op=UNLOAD Jan 20 07:01:59.527000 audit[2669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626233646335333438396133376231613933393331343435366537 Jan 20 07:01:59.527000 audit: BPF prog-id=117 op=LOAD Jan 20 07:01:59.527000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2569 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:01:59.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061626233646335333438396133376231613933393331343435366537 Jan 20 07:01:59.634671 containerd[1604]: time="2026-01-20T07:01:59.634475401Z" level=info msg="StartContainer for \"4ba99115f1df5a7158afca147ac311e85d6f3567672ae55b05721488915b9bac\" returns successfully" Jan 20 07:01:59.658268 containerd[1604]: time="2026-01-20T07:01:59.658226543Z" level=info msg="StartContainer for \"0abb3dc53489a37b1a939314456e7bdde9727e45d750d0bf38f51d8d9298fa85\" returns successfully" Jan 20 07:01:59.658268 containerd[1604]: time="2026-01-20T07:01:59.658370273Z" level=info msg="StartContainer for \"a76e591c5105ca409e6cbd4b13538031e8d2d7793985316e148607b2ddacb716\" returns successfully" Jan 20 07:01:59.661005 kubelet[2495]: I0120 07:01:59.660857 2495 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-121" Jan 20 07:01:59.662704 kubelet[2495]: E0120 07:01:59.662592 2495 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.7.121:6443/api/v1/nodes\": dial tcp 172.232.7.121:6443: connect: connection refused" node="172-232-7-121" Jan 20 07:02:00.063348 kubelet[2495]: E0120 07:02:00.063303 2495 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-121\" not found" node="172-232-7-121" Jan 20 07:02:00.063526 kubelet[2495]: E0120 07:02:00.063510 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:00.067500 kubelet[2495]: E0120 07:02:00.067469 2495 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-121\" not found" node="172-232-7-121" Jan 20 07:02:00.067732 kubelet[2495]: E0120 07:02:00.067707 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:00.069907 kubelet[2495]: E0120 07:02:00.069879 2495 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-121\" not found" node="172-232-7-121" Jan 20 07:02:00.070043 kubelet[2495]: E0120 07:02:00.070018 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:01.075815 kubelet[2495]: E0120 07:02:01.075736 2495 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-121\" not found" node="172-232-7-121" Jan 20 07:02:01.078145 kubelet[2495]: E0120 07:02:01.075898 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:01.078145 kubelet[2495]: E0120 07:02:01.076249 2495 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-121\" not found" node="172-232-7-121" Jan 20 07:02:01.078145 kubelet[2495]: E0120 07:02:01.076350 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:01.078145 kubelet[2495]: E0120 07:02:01.076616 2495 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-121\" not found" node="172-232-7-121" Jan 20 07:02:01.078145 kubelet[2495]: E0120 07:02:01.076707 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:01.282594 kubelet[2495]: I0120 07:02:01.282230 2495 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-121" Jan 20 07:02:03.547476 kubelet[2495]: E0120 07:02:03.547376 2495 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-7-121\" not found" node="172-232-7-121" Jan 20 07:02:03.619488 kubelet[2495]: I0120 07:02:03.619413 2495 kubelet_node_status.go:78] "Successfully registered node" node="172-232-7-121" Jan 20 07:02:03.688686 kubelet[2495]: I0120 07:02:03.688596 2495 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:02:03.698658 kubelet[2495]: E0120 07:02:03.698593 2495 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-7-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:02:03.698658 kubelet[2495]: I0120 07:02:03.698649 2495 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:02:03.700539 kubelet[2495]: E0120 07:02:03.700509 2495 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-232-7-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:02:03.700688 kubelet[2495]: I0120 07:02:03.700536 2495 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-7-121" Jan 20 07:02:03.702841 
kubelet[2495]: E0120 07:02:03.702755 2495 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-7-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-7-121" Jan 20 07:02:03.959978 kubelet[2495]: I0120 07:02:03.959082 2495 apiserver.go:52] "Watching apiserver" Jan 20 07:02:03.987851 kubelet[2495]: I0120 07:02:03.987769 2495 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 07:02:04.688896 kubelet[2495]: I0120 07:02:04.688527 2495 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:02:04.697800 kubelet[2495]: E0120 07:02:04.697755 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:05.082631 kubelet[2495]: E0120 07:02:05.082591 2495 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:05.784593 systemd[1]: Reload requested from client PID 2770 ('systemctl') (unit session-10.scope)... Jan 20 07:02:05.784647 systemd[1]: Reloading... Jan 20 07:02:06.072743 zram_generator::config[2821]: No configuration found. Jan 20 07:02:06.379816 systemd[1]: Reloading finished in 594 ms. Jan 20 07:02:06.417687 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 07:02:06.445232 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 07:02:06.446388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 07:02:06.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:02:06.446749 systemd[1]: kubelet.service: Consumed 1.453s CPU time, 132.4M memory peak. Jan 20 07:02:06.451232 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 20 07:02:06.451386 kernel: audit: type=1131 audit(1768892526.445:400): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:02:06.454749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 07:02:06.456000 audit: BPF prog-id=118 op=LOAD Jan 20 07:02:06.461203 kernel: audit: type=1334 audit(1768892526.456:401): prog-id=118 op=LOAD Jan 20 07:02:06.456000 audit: BPF prog-id=87 op=UNLOAD Jan 20 07:02:06.459000 audit: BPF prog-id=119 op=LOAD Jan 20 07:02:06.460000 audit: BPF prog-id=67 op=UNLOAD Jan 20 07:02:06.465494 kernel: audit: type=1334 audit(1768892526.456:402): prog-id=87 op=UNLOAD Jan 20 07:02:06.467452 kernel: audit: type=1334 audit(1768892526.459:403): prog-id=119 op=LOAD Jan 20 07:02:06.467514 kernel: audit: type=1334 audit(1768892526.460:404): prog-id=67 op=UNLOAD Jan 20 07:02:06.467563 kernel: audit: type=1334 audit(1768892526.464:405): prog-id=120 op=LOAD Jan 20 07:02:06.467600 kernel: audit: type=1334 audit(1768892526.464:406): prog-id=121 op=LOAD Jan 20 07:02:06.467670 kernel: audit: type=1334 audit(1768892526.464:407): prog-id=75 op=UNLOAD Jan 20 07:02:06.467727 kernel: audit: type=1334 audit(1768892526.464:408): prog-id=76 op=UNLOAD Jan 20 07:02:06.467765 kernel: audit: type=1334 audit(1768892526.466:409): prog-id=122 op=LOAD Jan 20 07:02:06.464000 audit: BPF prog-id=120 op=LOAD Jan 20 07:02:06.464000 audit: BPF prog-id=121 op=LOAD Jan 20 07:02:06.464000 audit: BPF prog-id=75 op=UNLOAD Jan 20 07:02:06.464000 audit: BPF prog-id=76 op=UNLOAD Jan 20 07:02:06.466000 audit: BPF prog-id=122 op=LOAD Jan 20 07:02:06.466000 audit: BPF prog-id=68 op=UNLOAD Jan 20 07:02:06.466000 audit: BPF prog-id=123 op=LOAD Jan 20 07:02:06.466000 audit: BPF prog-id=124 op=LOAD Jan 20 07:02:06.466000 audit: BPF prog-id=69 op=UNLOAD Jan 20 07:02:06.466000 audit: BPF prog-id=70 op=UNLOAD Jan 20 07:02:06.467000 audit: BPF prog-id=125 op=LOAD Jan 20 07:02:06.467000 audit: BPF prog-id=77 op=UNLOAD Jan 20 07:02:06.467000 audit: BPF prog-id=126 op=LOAD Jan 20 07:02:06.467000 audit: BPF prog-id=127 op=LOAD Jan 20 07:02:06.467000 audit: BPF prog-id=78 op=UNLOAD Jan 20 07:02:06.467000 audit: BPF prog-id=79 op=UNLOAD Jan 20 07:02:06.468000 audit: BPF prog-id=128 op=LOAD Jan 20 07:02:06.468000 audit: BPF prog-id=83 op=UNLOAD Jan 20 07:02:06.469000 audit: BPF prog-id=129 op=LOAD Jan 20 07:02:06.469000 audit: BPF prog-id=130 op=LOAD Jan 20 07:02:06.469000 audit: BPF prog-id=84 op=UNLOAD Jan 20 07:02:06.469000 audit: BPF prog-id=85 op=UNLOAD Jan 20 07:02:06.471000 audit: BPF prog-id=131 op=LOAD Jan 20 07:02:06.471000 audit: BPF prog-id=72 op=UNLOAD Jan 20 07:02:06.471000 audit: BPF prog-id=132 op=LOAD Jan 20 07:02:06.471000 audit: BPF prog-id=133 op=LOAD Jan 20 07:02:06.471000 audit: BPF prog-id=73 op=UNLOAD Jan 20 07:02:06.471000 audit: BPF prog-id=74 op=UNLOAD Jan 20 07:02:06.473000 audit: BPF prog-id=134 op=LOAD Jan 20 07:02:06.473000 audit: BPF prog-id=80 op=UNLOAD Jan 20 07:02:06.473000 audit: BPF prog-id=135 op=LOAD Jan 20 07:02:06.473000 audit: BPF prog-id=136 op=LOAD Jan 20 07:02:06.473000 audit: BPF prog-id=81 op=UNLOAD Jan 20 07:02:06.473000 audit: BPF prog-id=82 op=UNLOAD Jan 20 07:02:06.474000 audit: BPF prog-id=137 op=LOAD Jan 20 07:02:06.475000 audit: BPF prog-id=71 op=UNLOAD Jan 20 07:02:06.479000 audit: BPF prog-id=138 op=LOAD Jan 20 07:02:06.479000 audit: BPF prog-id=86 op=UNLOAD Jan 20 07:02:06.941746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 07:02:06.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:02:06.955443 (kubelet)[2867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 07:02:07.044730 kubelet[2867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 07:02:07.044730 kubelet[2867]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 07:02:07.044730 kubelet[2867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 07:02:07.045285 kubelet[2867]: I0120 07:02:07.044826 2867 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 07:02:07.061165 kubelet[2867]: I0120 07:02:07.061115 2867 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 07:02:07.061165 kubelet[2867]: I0120 07:02:07.061150 2867 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 07:02:07.062566 kubelet[2867]: I0120 07:02:07.061416 2867 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 07:02:07.062873 kubelet[2867]: I0120 07:02:07.062838 2867 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 07:02:07.066159 kubelet[2867]: I0120 07:02:07.066123 2867 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 07:02:07.076958 kubelet[2867]: I0120 07:02:07.076924 2867 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 07:02:07.085615 kubelet[2867]: I0120 07:02:07.085560 2867 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 07:02:07.086255 kubelet[2867]: I0120 07:02:07.086193 2867 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 07:02:07.087053 kubelet[2867]: I0120 07:02:07.086222 2867 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-7-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 07:02:07.087053 kubelet[2867]: I0120 07:02:07.086999 2867 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 07:02:07.087769 kubelet[2867]: I0120 07:02:07.087266 2867 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 07:02:07.088004 kubelet[2867]: I0120 07:02:07.087985 2867 state_mem.go:36] "Initialized new in-memory state store" Jan 20 07:02:07.090557 kubelet[2867]: I0120 07:02:07.090538 2867 kubelet.go:480] "Attempting to sync node with API server" Jan 20 07:02:07.090643 kubelet[2867]: I0120 07:02:07.090630 2867 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 07:02:07.090740 kubelet[2867]: I0120 07:02:07.090726 2867 kubelet.go:386] "Adding apiserver pod source" Jan 20 07:02:07.090846 kubelet[2867]: I0120 07:02:07.090833 2867 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 07:02:07.098942 kubelet[2867]: I0120 07:02:07.098818 2867 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 07:02:07.101935 kubelet[2867]: I0120 07:02:07.099823 2867 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 07:02:07.115128 kubelet[2867]: I0120 07:02:07.114661 2867 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 07:02:07.119631 kubelet[2867]: I0120 07:02:07.119598 2867 server.go:1289] "Started kubelet" Jan 20 07:02:07.125813 kubelet[2867]: I0120 07:02:07.124776 2867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 
07:02:07.127566 kubelet[2867]: I0120 07:02:07.125981 2867 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 07:02:07.132776 kubelet[2867]: I0120 07:02:07.132744 2867 server.go:317] "Adding debug handlers to kubelet server" Jan 20 07:02:07.136001 kubelet[2867]: I0120 07:02:07.135983 2867 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 07:02:07.138132 kubelet[2867]: I0120 07:02:07.138084 2867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 07:02:07.140079 kubelet[2867]: I0120 07:02:07.140046 2867 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 07:02:07.142360 kubelet[2867]: E0120 07:02:07.142337 2867 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 07:02:07.143931 kubelet[2867]: I0120 07:02:07.143914 2867 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 07:02:07.146058 kubelet[2867]: I0120 07:02:07.146041 2867 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 07:02:07.146538 kubelet[2867]: I0120 07:02:07.146494 2867 reconciler.go:26] "Reconciler: start to sync state" Jan 20 07:02:07.147641 kubelet[2867]: I0120 07:02:07.147612 2867 factory.go:223] Registration of the systemd container factory successfully Jan 20 07:02:07.147924 kubelet[2867]: I0120 07:02:07.147904 2867 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 07:02:07.170162 kubelet[2867]: I0120 07:02:07.170122 2867 factory.go:223] Registration of the containerd container factory successfully Jan 20 07:02:07.238783 kubelet[2867]: I0120 07:02:07.237563 2867 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 07:02:07.243884 kubelet[2867]: I0120 07:02:07.243855 2867 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 07:02:07.244153 kubelet[2867]: I0120 07:02:07.244136 2867 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 07:02:07.244305 kubelet[2867]: I0120 07:02:07.244288 2867 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
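[Editor's note] The NodeConfig dump a few records above lists the kubelet's HardEvictionThresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). The toy sketch below only illustrates how such a threshold is read — a signal trips when the observed value falls below the fixed quantity or the percentage of capacity — it is not the kubelet's eviction manager, and the sample numbers at the bottom are invented.

# eviction_check.py - toy reading of the HardEvictionThresholds logged above.
# Thresholds are copied from the NodeConfig dump; everything else is illustrative.

MI = 1024 * 1024

# (signal, quantity_in_bytes, percentage_of_capacity) - from the log above
HARD_THRESHOLDS = [
    ("memory.available",   100 * MI, None),
    ("nodefs.available",   None,     0.10),
    ("nodefs.inodesFree",  None,     0.05),
    ("imagefs.available",  None,     0.15),
    ("imagefs.inodesFree", None,     0.05),
]

def tripped(signal, available, capacity):
    """Return True if the hard-eviction threshold for `signal` would be crossed."""
    for name, quantity, pct in HARD_THRESHOLDS:
        if name != signal:
            continue
        limit = quantity if quantity is not None else pct * capacity
        return available < limit
    raise KeyError(signal)

if __name__ == "__main__":
    # Hypothetical node observations, just to show the comparison direction.
    print(tripped("memory.available", available=80 * MI, capacity=8 * 1024 * MI))        # True
    print(tripped("nodefs.available", available=30 * 1024**3, capacity=100 * 1024**3))   # False (30% free)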
Jan 20 07:02:07.244529 kubelet[2867]: I0120 07:02:07.244446 2867 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 07:02:07.244736 kubelet[2867]: E0120 07:02:07.244677 2867 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 07:02:07.330748 kubelet[2867]: I0120 07:02:07.326579 2867 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 07:02:07.330748 kubelet[2867]: I0120 07:02:07.326644 2867 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 07:02:07.330748 kubelet[2867]: I0120 07:02:07.326721 2867 state_mem.go:36] "Initialized new in-memory state store" Jan 20 07:02:07.330748 kubelet[2867]: I0120 07:02:07.327170 2867 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 07:02:07.330748 kubelet[2867]: I0120 07:02:07.327223 2867 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 07:02:07.330748 kubelet[2867]: I0120 07:02:07.327291 2867 policy_none.go:49] "None policy: Start" Jan 20 07:02:07.330748 kubelet[2867]: I0120 07:02:07.327338 2867 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 07:02:07.330748 kubelet[2867]: I0120 07:02:07.327435 2867 state_mem.go:35] "Initializing new in-memory state store" Jan 20 07:02:07.330748 kubelet[2867]: I0120 07:02:07.327758 2867 state_mem.go:75] "Updated machine memory state" Jan 20 07:02:07.339799 kubelet[2867]: E0120 07:02:07.338580 2867 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 07:02:07.339799 kubelet[2867]: I0120 07:02:07.339008 2867 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 07:02:07.339799 kubelet[2867]: I0120 07:02:07.339025 2867 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 07:02:07.341017 kubelet[2867]: I0120 07:02:07.340986 2867 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 07:02:07.373404 kubelet[2867]: I0120 07:02:07.371746 2867 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-7-121" Jan 20 07:02:07.379941 kubelet[2867]: I0120 07:02:07.379897 2867 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:02:07.381371 kubelet[2867]: I0120 07:02:07.381338 2867 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:02:07.383415 kubelet[2867]: E0120 07:02:07.383108 2867 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 07:02:07.397956 kubelet[2867]: E0120 07:02:07.397923 2867 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-7-121\" already exists" pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:02:07.459111 kubelet[2867]: I0120 07:02:07.458649 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f910d39d41830cb164800cf2795b1e09-flexvolume-dir\") pod \"kube-controller-manager-172-232-7-121\" (UID: \"f910d39d41830cb164800cf2795b1e09\") " pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:02:07.459741 kubelet[2867]: I0120 07:02:07.459717 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f910d39d41830cb164800cf2795b1e09-k8s-certs\") pod \"kube-controller-manager-172-232-7-121\" (UID: \"f910d39d41830cb164800cf2795b1e09\") " pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:02:07.460315 kubelet[2867]: I0120 07:02:07.460254 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f910d39d41830cb164800cf2795b1e09-kubeconfig\") pod \"kube-controller-manager-172-232-7-121\" (UID: \"f910d39d41830cb164800cf2795b1e09\") " pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:02:07.460637 kubelet[2867]: I0120 07:02:07.460598 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f910d39d41830cb164800cf2795b1e09-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-7-121\" (UID: \"f910d39d41830cb164800cf2795b1e09\") " pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:02:07.461004 kubelet[2867]: I0120 07:02:07.460985 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b9343c42e5315bb566658a04dd3da6a-ca-certs\") pod \"kube-apiserver-172-232-7-121\" (UID: \"9b9343c42e5315bb566658a04dd3da6a\") " pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:02:07.461682 kubelet[2867]: I0120 07:02:07.461655 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f910d39d41830cb164800cf2795b1e09-ca-certs\") pod \"kube-controller-manager-172-232-7-121\" (UID: \"f910d39d41830cb164800cf2795b1e09\") " pod="kube-system/kube-controller-manager-172-232-7-121" Jan 20 07:02:07.486663 kubelet[2867]: I0120 07:02:07.486321 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45e5a40cddacbca27ac5f0615d062e27-kubeconfig\") pod \"kube-scheduler-172-232-7-121\" (UID: \"45e5a40cddacbca27ac5f0615d062e27\") " pod="kube-system/kube-scheduler-172-232-7-121" Jan 20 07:02:07.487337 kubelet[2867]: I0120 07:02:07.487228 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b9343c42e5315bb566658a04dd3da6a-k8s-certs\") pod \"kube-apiserver-172-232-7-121\" (UID: \"9b9343c42e5315bb566658a04dd3da6a\") " pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:02:07.487895 kubelet[2867]: I0120 
07:02:07.487787 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b9343c42e5315bb566658a04dd3da6a-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-7-121\" (UID: \"9b9343c42e5315bb566658a04dd3da6a\") " pod="kube-system/kube-apiserver-172-232-7-121" Jan 20 07:02:07.537055 kubelet[2867]: I0120 07:02:07.536936 2867 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-121" Jan 20 07:02:07.583114 kubelet[2867]: I0120 07:02:07.583063 2867 kubelet_node_status.go:124] "Node was previously registered" node="172-232-7-121" Jan 20 07:02:07.583476 kubelet[2867]: I0120 07:02:07.583454 2867 kubelet_node_status.go:78] "Successfully registered node" node="172-232-7-121" Jan 20 07:02:07.693163 kubelet[2867]: E0120 07:02:07.692914 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:07.702560 kubelet[2867]: E0120 07:02:07.702519 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:07.719235 kubelet[2867]: E0120 07:02:07.719131 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:08.094689 kubelet[2867]: I0120 07:02:08.094451 2867 apiserver.go:52] "Watching apiserver" Jan 20 07:02:08.165398 kubelet[2867]: I0120 07:02:08.165350 2867 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 07:02:08.238320 kubelet[2867]: I0120 07:02:08.238020 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-7-121" podStartSLOduration=1.23797244 podStartE2EDuration="1.23797244s" podCreationTimestamp="2026-01-20 07:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 07:02:08.218136817 +0000 UTC m=+1.248810352" watchObservedRunningTime="2026-01-20 07:02:08.23797244 +0000 UTC m=+1.268645975" Jan 20 07:02:08.253198 kubelet[2867]: I0120 07:02:08.253088 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-7-121" podStartSLOduration=4.253068896 podStartE2EDuration="4.253068896s" podCreationTimestamp="2026-01-20 07:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 07:02:08.239750786 +0000 UTC m=+1.270424331" watchObservedRunningTime="2026-01-20 07:02:08.253068896 +0000 UTC m=+1.283742431" Jan 20 07:02:08.265090 kubelet[2867]: I0120 07:02:08.264926 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-7-121" podStartSLOduration=1.2649122369999999 podStartE2EDuration="1.264912237s" podCreationTimestamp="2026-01-20 07:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 07:02:08.254510426 +0000 UTC m=+1.285183961" watchObservedRunningTime="2026-01-20 07:02:08.264912237 +0000 UTC m=+1.295585772" Jan 20 07:02:08.290244 
kubelet[2867]: E0120 07:02:08.289627 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:08.294216 kubelet[2867]: E0120 07:02:08.292252 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:08.294216 kubelet[2867]: E0120 07:02:08.292526 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:09.294546 kubelet[2867]: E0120 07:02:09.293354 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:09.294546 kubelet[2867]: E0120 07:02:09.293617 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:10.293824 kubelet[2867]: E0120 07:02:10.293778 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:11.451433 kubelet[2867]: I0120 07:02:11.451307 2867 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 07:02:11.454111 containerd[1604]: time="2026-01-20T07:02:11.453965311Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 07:02:11.455250 kubelet[2867]: I0120 07:02:11.455220 2867 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 07:02:12.010438 systemd[1]: Created slice kubepods-besteffort-pod88310b31_e4a4_4055_95fd_fe8dfa9f815d.slice - libcontainer container kubepods-besteffort-pod88310b31_e4a4_4055_95fd_fe8dfa9f815d.slice. 
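[Editor's note] The repeated "Nameserver limits exceeded" errors above mean the node's resolv.conf lists more nameservers than the kubelet will hand to pods; the "applied nameserver line" in each error keeps only the first three entries. The sketch below mimics that truncation under the assumption of the conventional three-entry resolver limit — it is an illustration, not the kubelet's dns.go, and the fourth nameserver in the sample is invented.

# resolv_trim.py - illustrate the truncation behind "Nameserver limits exceeded".
# The three surviving addresses match the "applied nameserver line" in the errors above;
# the three-entry limit is an assumption based on the conventional resolver maximum.

MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str):
    servers = [line.split()[1]
               for line in resolv_conf_text.splitlines()
               if line.strip().startswith("nameserver") and len(line.split()) > 1]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

if __name__ == "__main__":
    # Hypothetical resolv.conf with one more nameserver than the limit allows.
    sample = """\
nameserver 172.232.0.16
nameserver 172.232.0.21
nameserver 172.232.0.13
nameserver 8.8.8.8
"""
    kept, omitted = applied_nameservers(sample)
    print("applied:", " ".join(kept))    # the first three, as in the log
    print("omitted:", " ".join(omitted))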
Jan 20 07:02:12.021230 kubelet[2867]: I0120 07:02:12.019772 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88310b31-e4a4-4055-95fd-fe8dfa9f815d-kube-proxy\") pod \"kube-proxy-vpws8\" (UID: \"88310b31-e4a4-4055-95fd-fe8dfa9f815d\") " pod="kube-system/kube-proxy-vpws8" Jan 20 07:02:12.021230 kubelet[2867]: I0120 07:02:12.019838 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrh9\" (UniqueName: \"kubernetes.io/projected/88310b31-e4a4-4055-95fd-fe8dfa9f815d-kube-api-access-mdrh9\") pod \"kube-proxy-vpws8\" (UID: \"88310b31-e4a4-4055-95fd-fe8dfa9f815d\") " pod="kube-system/kube-proxy-vpws8" Jan 20 07:02:12.021230 kubelet[2867]: I0120 07:02:12.019879 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88310b31-e4a4-4055-95fd-fe8dfa9f815d-xtables-lock\") pod \"kube-proxy-vpws8\" (UID: \"88310b31-e4a4-4055-95fd-fe8dfa9f815d\") " pod="kube-system/kube-proxy-vpws8" Jan 20 07:02:12.021230 kubelet[2867]: I0120 07:02:12.019904 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88310b31-e4a4-4055-95fd-fe8dfa9f815d-lib-modules\") pod \"kube-proxy-vpws8\" (UID: \"88310b31-e4a4-4055-95fd-fe8dfa9f815d\") " pod="kube-system/kube-proxy-vpws8" Jan 20 07:02:12.127616 kubelet[2867]: E0120 07:02:12.127489 2867 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 20 07:02:12.127616 kubelet[2867]: E0120 07:02:12.127545 2867 projected.go:194] Error preparing data for projected volume kube-api-access-mdrh9 for pod kube-system/kube-proxy-vpws8: configmap "kube-root-ca.crt" not found Jan 20 07:02:12.127898 kubelet[2867]: E0120 07:02:12.127876 2867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88310b31-e4a4-4055-95fd-fe8dfa9f815d-kube-api-access-mdrh9 podName:88310b31-e4a4-4055-95fd-fe8dfa9f815d nodeName:}" failed. No retries permitted until 2026-01-20 07:02:12.627820757 +0000 UTC m=+5.658494292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mdrh9" (UniqueName: "kubernetes.io/projected/88310b31-e4a4-4055-95fd-fe8dfa9f815d-kube-api-access-mdrh9") pod "kube-proxy-vpws8" (UID: "88310b31-e4a4-4055-95fd-fe8dfa9f815d") : configmap "kube-root-ca.crt" not found Jan 20 07:02:12.615368 systemd[1]: Created slice kubepods-besteffort-pod7283770d_cfc1_4766_a90a_e6012cecbe8e.slice - libcontainer container kubepods-besteffort-pod7283770d_cfc1_4766_a90a_e6012cecbe8e.slice. 
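[Editor's note] The MountVolume.SetUp failure above is not retried immediately: "No retries permitted until 2026-01-20 07:02:12.627820757 ... (durationBeforeRetry 500ms)". The sketch below shows a generic exponential backoff schedule of that shape; only the 500 ms initial delay and the failure timestamp come from the log, while the doubling factor and the cap are assumptions for illustration.

# backoff_schedule.py - sketch of a retry schedule like "durationBeforeRetry 500ms".
# INITIAL and the failure time are taken from the log above; FACTOR and CAP are assumed.
from datetime import datetime, timedelta

INITIAL = timedelta(milliseconds=500)  # from the log line above
FACTOR = 2.0                           # assumption
CAP = timedelta(minutes=2)             # assumption

def next_permitted(last_failure: datetime, attempts: int) -> datetime:
    """Earliest time another retry would be permitted after `attempts` failures."""
    delay = min(INITIAL * (FACTOR ** max(attempts - 1, 0)), CAP)
    return last_failure + delay

if __name__ == "__main__":
    failed_at = datetime(2026, 1, 20, 7, 2, 12, 127820)
    for n in range(1, 5):
        print(n, next_permitted(failed_at, n))
    # Attempt 1 becomes permitted 500ms after the failure, i.e. 07:02:12.627820,
    # matching the "No retries permitted until ..." record above.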
Jan 20 07:02:12.623841 kubelet[2867]: I0120 07:02:12.623776 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7283770d-cfc1-4766-a90a-e6012cecbe8e-var-lib-calico\") pod \"tigera-operator-7dcd859c48-9jxlt\" (UID: \"7283770d-cfc1-4766-a90a-e6012cecbe8e\") " pod="tigera-operator/tigera-operator-7dcd859c48-9jxlt" Jan 20 07:02:12.624491 kubelet[2867]: I0120 07:02:12.624123 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9f4p\" (UniqueName: \"kubernetes.io/projected/7283770d-cfc1-4766-a90a-e6012cecbe8e-kube-api-access-n9f4p\") pod \"tigera-operator-7dcd859c48-9jxlt\" (UID: \"7283770d-cfc1-4766-a90a-e6012cecbe8e\") " pod="tigera-operator/tigera-operator-7dcd859c48-9jxlt" Jan 20 07:02:12.923090 containerd[1604]: time="2026-01-20T07:02:12.922926221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9jxlt,Uid:7283770d-cfc1-4766-a90a-e6012cecbe8e,Namespace:tigera-operator,Attempt:0,}" Jan 20 07:02:12.924383 containerd[1604]: time="2026-01-20T07:02:12.924058680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vpws8,Uid:88310b31-e4a4-4055-95fd-fe8dfa9f815d,Namespace:kube-system,Attempt:0,}" Jan 20 07:02:12.924427 kubelet[2867]: E0120 07:02:12.923547 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:12.998236 containerd[1604]: time="2026-01-20T07:02:12.989774548Z" level=info msg="connecting to shim dd72f442227c245c59464d712b9f9af656c5c3e41e87e20215871a3862e1b766" address="unix:///run/containerd/s/7d6545e757bf6e35865d41969b6bbbc6cd71a601ea33cdea7db23f892569cc4d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:02:13.005683 containerd[1604]: time="2026-01-20T07:02:13.005617784Z" level=info msg="connecting to shim 9877cd66937b6c972803ab7d371a54b9ffd37b0961cb0f3e3aae6ad7fe21e337" address="unix:///run/containerd/s/1f6f6a1161ebd449c4ecd75753c8c1639ca6204838f7b32178b547292ce834d5" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:02:13.151437 systemd[1]: Started cri-containerd-9877cd66937b6c972803ab7d371a54b9ffd37b0961cb0f3e3aae6ad7fe21e337.scope - libcontainer container 9877cd66937b6c972803ab7d371a54b9ffd37b0961cb0f3e3aae6ad7fe21e337. 
Jan 20 07:02:13.238734 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 20 07:02:13.238979 kernel: audit: type=1334 audit(1768892533.232:444): prog-id=139 op=LOAD Jan 20 07:02:13.239072 kernel: audit: type=1334 audit(1768892533.233:445): prog-id=140 op=LOAD Jan 20 07:02:13.232000 audit: BPF prog-id=139 op=LOAD Jan 20 07:02:13.233000 audit: BPF prog-id=140 op=LOAD Jan 20 07:02:13.248244 kernel: audit: type=1300 audit(1768892533.233:445): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2937 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.233000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2937 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938373763643636393337623663393732383033616237643337316135 Jan 20 07:02:13.233000 audit: BPF prog-id=140 op=UNLOAD Jan 20 07:02:13.260781 kernel: audit: type=1327 audit(1768892533.233:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938373763643636393337623663393732383033616237643337316135 Jan 20 07:02:13.261079 kernel: audit: type=1334 audit(1768892533.233:446): prog-id=140 op=UNLOAD Jan 20 07:02:13.261141 kernel: audit: type=1300 audit(1768892533.233:446): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.233000 audit[2947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.270054 kernel: audit: type=1327 audit(1768892533.233:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938373763643636393337623663393732383033616237643337316135 Jan 20 07:02:13.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938373763643636393337623663393732383033616237643337316135 Jan 20 07:02:13.269140 systemd[1]: Started cri-containerd-dd72f442227c245c59464d712b9f9af656c5c3e41e87e20215871a3862e1b766.scope - libcontainer container dd72f442227c245c59464d712b9f9af656c5c3e41e87e20215871a3862e1b766. 
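[Editor's note] The audit PROCTITLE fields above and below are hex-encoded command lines: the audit subsystem hex-encodes the proctitle when it contains non-printable bytes, and here the NUL separators between argv entries trigger that. The standalone sketch below (not part of any tooling referenced in this log) decodes such a string back into a readable command; the sample value is copied verbatim from one of the iptables records later in this log.

# decode_proctitle.py - turn an audit PROCTITLE hex string into a readable command line.
# argv entries are separated by NUL bytes inside the decoded data.

def decode_proctitle(hex_string: str) -> str:
    raw = bytes.fromhex(hex_string)
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    # Sample taken from an iptables NETFILTER_CFG record further down in this log.
    sample = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(decode_proctitle(sample))
    # Output: iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

Applied to the runc PROCTITLE records in this section, the same decoding yields the runc invocations for the two containerd shims started above.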
Jan 20 07:02:13.233000 audit: BPF prog-id=141 op=LOAD Jan 20 07:02:13.282274 kernel: audit: type=1334 audit(1768892533.233:447): prog-id=141 op=LOAD Jan 20 07:02:13.233000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2937 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.288329 containerd[1604]: time="2026-01-20T07:02:13.287497929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vpws8,Uid:88310b31-e4a4-4055-95fd-fe8dfa9f815d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9877cd66937b6c972803ab7d371a54b9ffd37b0961cb0f3e3aae6ad7fe21e337\"" Jan 20 07:02:13.288990 kubelet[2867]: E0120 07:02:13.288965 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:13.298205 kernel: audit: type=1300 audit(1768892533.233:447): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2937 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.298328 kernel: audit: type=1327 audit(1768892533.233:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938373763643636393337623663393732383033616237643337316135 Jan 20 07:02:13.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938373763643636393337623663393732383033616237643337316135 Jan 20 07:02:13.233000 audit: BPF prog-id=142 op=LOAD Jan 20 07:02:13.233000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2937 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938373763643636393337623663393732383033616237643337316135 Jan 20 07:02:13.233000 audit: BPF prog-id=142 op=UNLOAD Jan 20 07:02:13.233000 audit[2947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938373763643636393337623663393732383033616237643337316135 Jan 20 07:02:13.233000 audit: BPF prog-id=141 op=UNLOAD Jan 20 07:02:13.233000 audit[2947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 
a1=0 a2=0 a3=0 items=0 ppid=2937 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938373763643636393337623663393732383033616237643337316135 Jan 20 07:02:13.233000 audit: BPF prog-id=143 op=LOAD Jan 20 07:02:13.233000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2937 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938373763643636393337623663393732383033616237643337316135 Jan 20 07:02:13.304794 containerd[1604]: time="2026-01-20T07:02:13.304638983Z" level=info msg="CreateContainer within sandbox \"9877cd66937b6c972803ab7d371a54b9ffd37b0961cb0f3e3aae6ad7fe21e337\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 07:02:13.321000 audit: BPF prog-id=144 op=LOAD Jan 20 07:02:13.322000 audit: BPF prog-id=145 op=LOAD Jan 20 07:02:13.322000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464373266343432323237633234356335393436346437313262396639 Jan 20 07:02:13.322000 audit: BPF prog-id=145 op=UNLOAD Jan 20 07:02:13.322000 audit[2967]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464373266343432323237633234356335393436346437313262396639 Jan 20 07:02:13.322000 audit: BPF prog-id=146 op=LOAD Jan 20 07:02:13.322000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.322000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464373266343432323237633234356335393436346437313262396639 Jan 20 07:02:13.322000 audit: BPF prog-id=147 op=LOAD Jan 20 07:02:13.322000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464373266343432323237633234356335393436346437313262396639 Jan 20 07:02:13.322000 audit: BPF prog-id=147 op=UNLOAD Jan 20 07:02:13.322000 audit[2967]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464373266343432323237633234356335393436346437313262396639 Jan 20 07:02:13.322000 audit: BPF prog-id=146 op=UNLOAD Jan 20 07:02:13.322000 audit[2967]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464373266343432323237633234356335393436346437313262396639 Jan 20 07:02:13.322000 audit: BPF prog-id=148 op=LOAD Jan 20 07:02:13.322000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464373266343432323237633234356335393436346437313262396639 Jan 20 07:02:13.331602 containerd[1604]: time="2026-01-20T07:02:13.331489842Z" level=info msg="Container f7cd8d91255d80c4303fae2ebe66a5010c5ff42c13af1298022009b88395fe50: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:02:13.341336 containerd[1604]: time="2026-01-20T07:02:13.341287748Z" level=info msg="CreateContainer within sandbox \"9877cd66937b6c972803ab7d371a54b9ffd37b0961cb0f3e3aae6ad7fe21e337\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7cd8d91255d80c4303fae2ebe66a5010c5ff42c13af1298022009b88395fe50\"" Jan 20 
07:02:13.342976 containerd[1604]: time="2026-01-20T07:02:13.342353043Z" level=info msg="StartContainer for \"f7cd8d91255d80c4303fae2ebe66a5010c5ff42c13af1298022009b88395fe50\"" Jan 20 07:02:13.344555 containerd[1604]: time="2026-01-20T07:02:13.344466914Z" level=info msg="connecting to shim f7cd8d91255d80c4303fae2ebe66a5010c5ff42c13af1298022009b88395fe50" address="unix:///run/containerd/s/1f6f6a1161ebd449c4ecd75753c8c1639ca6204838f7b32178b547292ce834d5" protocol=ttrpc version=3 Jan 20 07:02:13.404209 containerd[1604]: time="2026-01-20T07:02:13.403705184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9jxlt,Uid:7283770d-cfc1-4766-a90a-e6012cecbe8e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dd72f442227c245c59464d712b9f9af656c5c3e41e87e20215871a3862e1b766\"" Jan 20 07:02:13.415670 containerd[1604]: time="2026-01-20T07:02:13.415621871Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 07:02:13.417548 systemd[1]: Started cri-containerd-f7cd8d91255d80c4303fae2ebe66a5010c5ff42c13af1298022009b88395fe50.scope - libcontainer container f7cd8d91255d80c4303fae2ebe66a5010c5ff42c13af1298022009b88395fe50. Jan 20 07:02:13.502000 audit: BPF prog-id=149 op=LOAD Jan 20 07:02:13.502000 audit[3000]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2937 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636438643931323535643830633433303366616532656265363661 Jan 20 07:02:13.502000 audit: BPF prog-id=150 op=LOAD Jan 20 07:02:13.502000 audit[3000]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2937 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636438643931323535643830633433303366616532656265363661 Jan 20 07:02:13.502000 audit: BPF prog-id=150 op=UNLOAD Jan 20 07:02:13.502000 audit[3000]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636438643931323535643830633433303366616532656265363661 Jan 20 07:02:13.502000 audit: BPF prog-id=149 op=UNLOAD Jan 20 07:02:13.502000 audit[3000]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636438643931323535643830633433303366616532656265363661 Jan 20 07:02:13.502000 audit: BPF prog-id=151 op=LOAD Jan 20 07:02:13.502000 audit[3000]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2937 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637636438643931323535643830633433303366616532656265363661 Jan 20 07:02:13.558449 containerd[1604]: time="2026-01-20T07:02:13.558339230Z" level=info msg="StartContainer for \"f7cd8d91255d80c4303fae2ebe66a5010c5ff42c13af1298022009b88395fe50\" returns successfully" Jan 20 07:02:13.834000 audit[3071]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.834000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff584dd620 a2=0 a3=7fff584dd60c items=0 ppid=3020 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.834000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 07:02:13.836000 audit[3072]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:13.836000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc20a750a0 a2=0 a3=ac87343efeda0dc4 items=0 ppid=3020 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 07:02:13.838000 audit[3073]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:13.838000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee7862a60 a2=0 a3=7ffee7862a4c items=0 ppid=3020 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.838000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 07:02:13.841000 audit[3075]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.841000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 
a1=7ffde1e4db80 a2=0 a3=7ffde1e4db6c items=0 ppid=3020 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.841000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 07:02:13.844000 audit[3076]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:13.844000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb90f0320 a2=0 a3=7ffdb90f030c items=0 ppid=3020 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 07:02:13.851000 audit[3077]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.851000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd40704a70 a2=0 a3=7ffd40704a5c items=0 ppid=3020 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 07:02:13.948000 audit[3080]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.948000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe890273a0 a2=0 a3=7ffe8902738c items=0 ppid=3020 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.948000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 07:02:13.953000 audit[3082]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.953000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffee01450b0 a2=0 a3=7ffee014509c items=0 ppid=3020 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 07:02:13.960000 audit[3085]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.960000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7fff5df57520 a2=0 a3=7fff5df5750c items=0 ppid=3020 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.960000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 07:02:13.961000 audit[3086]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.961000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff35e39680 a2=0 a3=7fff35e3966c items=0 ppid=3020 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.961000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 07:02:13.965000 audit[3088]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.965000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff0d7f13c0 a2=0 a3=7fff0d7f13ac items=0 ppid=3020 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.965000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 07:02:13.968000 audit[3089]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.968000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed30db030 a2=0 a3=7ffed30db01c items=0 ppid=3020 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 07:02:13.972000 audit[3091]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.972000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff302c7800 a2=0 a3=7fff302c77ec items=0 ppid=3020 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.972000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 
07:02:13.977000 audit[3094]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.977000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe0360cdf0 a2=0 a3=7ffe0360cddc items=0 ppid=3020 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.977000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 07:02:13.979000 audit[3095]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.979000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff92b677e0 a2=0 a3=7fff92b677cc items=0 ppid=3020 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 07:02:13.983000 audit[3097]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.983000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff45b2c310 a2=0 a3=7fff45b2c2fc items=0 ppid=3020 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 07:02:13.985000 audit[3098]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.985000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffee52ac90 a2=0 a3=7fffee52ac7c items=0 ppid=3020 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.985000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 07:02:13.989000 audit[3100]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.989000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe31d6900 a2=0 a3=7fffe31d68ec items=0 ppid=3020 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.989000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 07:02:13.995000 audit[3103]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:13.995000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd19f67a70 a2=0 a3=7ffd19f67a5c items=0 ppid=3020 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:13.995000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 07:02:14.001000 audit[3106]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:14.001000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffebba6ecd0 a2=0 a3=7ffebba6ecbc items=0 ppid=3020 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 07:02:14.003000 audit[3107]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:14.003000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe5ab009a0 a2=0 a3=7ffe5ab0098c items=0 ppid=3020 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.003000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 07:02:14.007000 audit[3109]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:14.007000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd3814fca0 a2=0 a3=7ffd3814fc8c items=0 ppid=3020 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.007000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 07:02:14.013000 audit[3112]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:14.013000 audit[3112]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd466331c0 a2=0 a3=7ffd466331ac items=0 ppid=3020 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 07:02:14.015000 audit[3113]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:14.015000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9dcb8790 a2=0 a3=7ffe9dcb877c items=0 ppid=3020 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.015000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 07:02:14.019000 audit[3115]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 07:02:14.019000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffef1168c30 a2=0 a3=7ffef1168c1c items=0 ppid=3020 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 07:02:14.054000 audit[3121]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:14.054000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd57bc1370 a2=0 a3=7ffd57bc135c items=0 ppid=3020 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:14.060000 audit[3121]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:14.060000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd57bc1370 a2=0 a3=7ffd57bc135c items=0 ppid=3020 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:14.063000 audit[3126]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3126 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.063000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffcc0b1c00 a2=0 a3=7fffcc0b1bec items=0 ppid=3020 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.063000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 07:02:14.067000 audit[3128]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.067000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcde3bdb90 a2=0 a3=7ffcde3bdb7c items=0 ppid=3020 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 07:02:14.076000 audit[3131]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.076000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe2c136330 a2=0 a3=7ffe2c13631c items=0 ppid=3020 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 07:02:14.078000 audit[3132]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.078000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8922a270 a2=0 a3=7ffe8922a25c items=0 ppid=3020 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 07:02:14.082000 audit[3134]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.082000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9770d070 a2=0 a3=7ffe9770d05c items=0 ppid=3020 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.082000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 07:02:14.085000 audit[3135]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.085000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe983a29b0 a2=0 a3=7ffe983a299c items=0 ppid=3020 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.085000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 07:02:14.091000 audit[3137]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.091000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeabfd2b70 a2=0 a3=7ffeabfd2b5c items=0 ppid=3020 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.091000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 07:02:14.100000 audit[3140]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.100000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe71688190 a2=0 a3=7ffe7168817c items=0 ppid=3020 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.100000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 07:02:14.102000 audit[3141]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.102000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd819f2b60 a2=0 a3=7ffd819f2b4c items=0 ppid=3020 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 07:02:14.106000 audit[3143]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.106000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe01ebed30 a2=0 
a3=7ffe01ebed1c items=0 ppid=3020 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.106000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 07:02:14.109000 audit[3144]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.109000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff060c5180 a2=0 a3=7fff060c516c items=0 ppid=3020 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.109000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 07:02:14.113000 audit[3146]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.113000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf65ff3b0 a2=0 a3=7ffcf65ff39c items=0 ppid=3020 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.113000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 07:02:14.118000 audit[3149]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.118000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca6f18db0 a2=0 a3=7ffca6f18d9c items=0 ppid=3020 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.118000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 07:02:14.125000 audit[3152]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.125000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff756c9c00 a2=0 a3=7fff756c9bec items=0 ppid=3020 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.125000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 07:02:14.126000 audit[3153]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.126000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0998fa50 a2=0 a3=7ffc0998fa3c items=0 ppid=3020 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.126000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 07:02:14.131000 audit[3155]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.131000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff1a5e3150 a2=0 a3=7fff1a5e313c items=0 ppid=3020 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.131000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 07:02:14.141000 audit[3158]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.141000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe97363d60 a2=0 a3=7ffe97363d4c items=0 ppid=3020 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 07:02:14.143000 audit[3159]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.143000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb1c11410 a2=0 a3=7ffcb1c113fc items=0 ppid=3020 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.143000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 07:02:14.146000 audit[3161]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.146000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffccf920f10 a2=0 a3=7ffccf920efc items=0 ppid=3020 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 07:02:14.148000 audit[3162]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.148000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa2e99330 a2=0 a3=7fffa2e9931c items=0 ppid=3020 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 07:02:14.152000 audit[3164]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.152000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd3c3be600 a2=0 a3=7ffd3c3be5ec items=0 ppid=3020 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 07:02:14.159000 audit[3167]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 07:02:14.159000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe81a9a900 a2=0 a3=7ffe81a9a8ec items=0 ppid=3020 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.159000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 07:02:14.166000 audit[3169]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 07:02:14.166000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff4473cf00 a2=0 a3=7fff4473ceec items=0 ppid=3020 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.166000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:14.166000 audit[3169]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 07:02:14.166000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff4473cf00 a2=0 a3=7fff4473ceec items=0 ppid=3020 pid=3169 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:14.166000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:14.326977 kubelet[2867]: E0120 07:02:14.325433 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:14.342938 kubelet[2867]: I0120 07:02:14.341255 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vpws8" podStartSLOduration=3.34003485 podStartE2EDuration="3.34003485s" podCreationTimestamp="2026-01-20 07:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 07:02:14.339245649 +0000 UTC m=+7.369919204" watchObservedRunningTime="2026-01-20 07:02:14.34003485 +0000 UTC m=+7.370708385" Jan 20 07:02:14.425757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252920817.mount: Deactivated successfully. Jan 20 07:02:15.058215 kubelet[2867]: E0120 07:02:15.057812 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:15.343755 kubelet[2867]: E0120 07:02:15.328928 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:15.343755 kubelet[2867]: E0120 07:02:15.329794 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:16.070268 kubelet[2867]: E0120 07:02:16.070147 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:16.319602 containerd[1604]: time="2026-01-20T07:02:16.319368742Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:16.321210 containerd[1604]: time="2026-01-20T07:02:16.320603407Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 20 07:02:16.323468 containerd[1604]: time="2026-01-20T07:02:16.323408522Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:16.330872 containerd[1604]: time="2026-01-20T07:02:16.330808673Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:16.332847 kubelet[2867]: E0120 07:02:16.332788 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:16.333451 containerd[1604]: time="2026-01-20T07:02:16.333392825Z" level=info msg="Pulled 
image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.917155515s" Jan 20 07:02:16.333625 containerd[1604]: time="2026-01-20T07:02:16.333577327Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 07:02:16.340083 containerd[1604]: time="2026-01-20T07:02:16.340016876Z" level=info msg="CreateContainer within sandbox \"dd72f442227c245c59464d712b9f9af656c5c3e41e87e20215871a3862e1b766\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 07:02:16.356825 containerd[1604]: time="2026-01-20T07:02:16.356778873Z" level=info msg="Container 3a7fc47df8887a42393a621b9ac8741e1f09b52ef3ac54af18b268c9f4e46b56: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:02:16.367384 containerd[1604]: time="2026-01-20T07:02:16.367322042Z" level=info msg="CreateContainer within sandbox \"dd72f442227c245c59464d712b9f9af656c5c3e41e87e20215871a3862e1b766\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3a7fc47df8887a42393a621b9ac8741e1f09b52ef3ac54af18b268c9f4e46b56\"" Jan 20 07:02:16.368931 containerd[1604]: time="2026-01-20T07:02:16.368905902Z" level=info msg="StartContainer for \"3a7fc47df8887a42393a621b9ac8741e1f09b52ef3ac54af18b268c9f4e46b56\"" Jan 20 07:02:16.372364 containerd[1604]: time="2026-01-20T07:02:16.371962030Z" level=info msg="connecting to shim 3a7fc47df8887a42393a621b9ac8741e1f09b52ef3ac54af18b268c9f4e46b56" address="unix:///run/containerd/s/7d6545e757bf6e35865d41969b6bbbc6cd71a601ea33cdea7db23f892569cc4d" protocol=ttrpc version=3 Jan 20 07:02:16.483724 systemd[1]: Started cri-containerd-3a7fc47df8887a42393a621b9ac8741e1f09b52ef3ac54af18b268c9f4e46b56.scope - libcontainer container 3a7fc47df8887a42393a621b9ac8741e1f09b52ef3ac54af18b268c9f4e46b56. 
Jan 20 07:02:16.564000 audit: BPF prog-id=152 op=LOAD Jan 20 07:02:16.565000 audit: BPF prog-id=153 op=LOAD Jan 20 07:02:16.565000 audit[3178]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2931 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:16.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361376663343764663838383761343233393361363231623961633837 Jan 20 07:02:16.565000 audit: BPF prog-id=153 op=UNLOAD Jan 20 07:02:16.565000 audit[3178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:16.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361376663343764663838383761343233393361363231623961633837 Jan 20 07:02:16.565000 audit: BPF prog-id=154 op=LOAD Jan 20 07:02:16.565000 audit[3178]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2931 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:16.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361376663343764663838383761343233393361363231623961633837 Jan 20 07:02:16.566000 audit: BPF prog-id=155 op=LOAD Jan 20 07:02:16.566000 audit[3178]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2931 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:16.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361376663343764663838383761343233393361363231623961633837 Jan 20 07:02:16.566000 audit: BPF prog-id=155 op=UNLOAD Jan 20 07:02:16.566000 audit[3178]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:16.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361376663343764663838383761343233393361363231623961633837 Jan 20 07:02:16.566000 audit: BPF prog-id=154 op=UNLOAD Jan 20 07:02:16.566000 audit[3178]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:16.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361376663343764663838383761343233393361363231623961633837 Jan 20 07:02:16.566000 audit: BPF prog-id=156 op=LOAD Jan 20 07:02:16.566000 audit[3178]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2931 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:16.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361376663343764663838383761343233393361363231623961633837 Jan 20 07:02:16.668963 containerd[1604]: time="2026-01-20T07:02:16.668615375Z" level=info msg="StartContainer for \"3a7fc47df8887a42393a621b9ac8741e1f09b52ef3ac54af18b268c9f4e46b56\" returns successfully" Jan 20 07:02:17.358019 kubelet[2867]: I0120 07:02:17.357874 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-9jxlt" podStartSLOduration=2.433266102 podStartE2EDuration="5.357825833s" podCreationTimestamp="2026-01-20 07:02:12 +0000 UTC" firstStartedPulling="2026-01-20 07:02:13.410835859 +0000 UTC m=+6.441509394" lastFinishedPulling="2026-01-20 07:02:16.33539559 +0000 UTC m=+9.366069125" observedRunningTime="2026-01-20 07:02:17.357320617 +0000 UTC m=+10.387994172" watchObservedRunningTime="2026-01-20 07:02:17.357825833 +0000 UTC m=+10.388499368" Jan 20 07:02:18.340708 kubelet[2867]: E0120 07:02:18.336603 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:18.354295 kubelet[2867]: E0120 07:02:18.354234 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:26.260100 sudo[1884]: pam_unix(sudo:session): session closed for user root Jan 20 07:02:26.262000 audit[1884]: USER_END pid=1884 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 07:02:26.263586 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 20 07:02:26.263762 kernel: audit: type=1106 audit(1768892546.262:524): pid=1884 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 07:02:26.274000 audit[1884]: CRED_DISP pid=1884 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 07:02:26.286365 kernel: audit: type=1104 audit(1768892546.274:525): pid=1884 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 07:02:26.304622 sshd[1883]: Connection closed by 20.161.92.111 port 49660 Jan 20 07:02:26.307448 sshd-session[1879]: pam_unix(sshd:session): session closed for user core Jan 20 07:02:26.325000 audit[1879]: USER_END pid=1879 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:02:26.338238 kernel: audit: type=1106 audit(1768892546.325:526): pid=1879 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:02:26.342967 systemd[1]: sshd@8-172.232.7.121:22-20.161.92.111:49660.service: Deactivated successfully. Jan 20 07:02:26.356455 kernel: audit: type=1104 audit(1768892546.325:527): pid=1879 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:02:26.325000 audit[1879]: CRED_DISP pid=1879 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:02:26.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.232.7.121:22-20.161.92.111:49660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:02:26.368119 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 07:02:26.368572 kernel: audit: type=1131 audit(1768892546.345:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.232.7.121:22-20.161.92.111:49660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:02:26.369748 systemd[1]: session-10.scope: Consumed 9.520s CPU time, 236.2M memory peak. Jan 20 07:02:26.375692 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Jan 20 07:02:26.386486 systemd-logind[1577]: Removed session 10. 
Jan 20 07:02:27.135000 audit[3260]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:27.141227 kernel: audit: type=1325 audit(1768892547.135:529): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:27.135000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffe8955890 a2=0 a3=7fffe895587c items=0 ppid=3020 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:27.154243 kernel: audit: type=1300 audit(1768892547.135:529): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffe8955890 a2=0 a3=7fffe895587c items=0 ppid=3020 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:27.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:27.162223 kernel: audit: type=1327 audit(1768892547.135:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:27.154000 audit[3260]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:27.169233 kernel: audit: type=1325 audit(1768892547.154:530): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:27.154000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe8955890 a2=0 a3=0 items=0 ppid=3020 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:27.180217 kernel: audit: type=1300 audit(1768892547.154:530): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe8955890 a2=0 a3=0 items=0 ppid=3020 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:27.154000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:27.197000 audit[3262]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:27.197000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffddec754c0 a2=0 a3=7ffddec754ac items=0 ppid=3020 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:27.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:27.202000 audit[3262]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3262 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:27.202000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffddec754c0 a2=0 a3=0 items=0 ppid=3020 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:27.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:30.612000 audit[3264]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:30.612000 audit[3264]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff09f58680 a2=0 a3=7fff09f5866c items=0 ppid=3020 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:30.612000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:30.617000 audit[3264]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:30.617000 audit[3264]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff09f58680 a2=0 a3=0 items=0 ppid=3020 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:30.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:30.637000 audit[3266]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:30.637000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcb060e200 a2=0 a3=7ffcb060e1ec items=0 ppid=3020 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:30.637000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:30.641000 audit[3266]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:30.641000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb060e200 a2=0 a3=0 items=0 ppid=3020 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:30.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:31.663000 audit[3268]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:31.667405 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 
20 07:02:31.667645 kernel: audit: type=1325 audit(1768892551.663:537): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:31.663000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffdd6d2d80 a2=0 a3=7fffdd6d2d6c items=0 ppid=3020 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:31.695208 kernel: audit: type=1300 audit(1768892551.663:537): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffdd6d2d80 a2=0 a3=7fffdd6d2d6c items=0 ppid=3020 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:31.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:31.701215 kernel: audit: type=1327 audit(1768892551.663:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:31.702000 audit[3268]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:31.702000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffdd6d2d80 a2=0 a3=0 items=0 ppid=3020 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:31.711882 kernel: audit: type=1325 audit(1768892551.702:538): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:31.712042 kernel: audit: type=1300 audit(1768892551.702:538): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffdd6d2d80 a2=0 a3=0 items=0 ppid=3020 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:31.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:31.721170 kernel: audit: type=1327 audit(1768892551.702:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:32.575317 systemd[1]: Created slice kubepods-besteffort-pode1af0a9f_54e7_407d_8551_28cd36126abc.slice - libcontainer container kubepods-besteffort-pode1af0a9f_54e7_407d_8551_28cd36126abc.slice. 
Jan 20 07:02:32.638000 audit[3270]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:32.646215 kernel: audit: type=1325 audit(1768892552.638:539): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:32.638000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffecca618c0 a2=0 a3=7ffecca618ac items=0 ppid=3020 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:32.658223 kernel: audit: type=1300 audit(1768892552.638:539): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffecca618c0 a2=0 a3=7ffecca618ac items=0 ppid=3020 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:32.638000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:32.657000 audit[3270]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:32.664032 kernel: audit: type=1327 audit(1768892552.638:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:32.664761 kernel: audit: type=1325 audit(1768892552.657:540): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:32.657000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffecca618c0 a2=0 a3=0 items=0 ppid=3020 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:32.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:32.748715 kubelet[2867]: I0120 07:02:32.748518 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w94rq\" (UniqueName: \"kubernetes.io/projected/e1af0a9f-54e7-407d-8551-28cd36126abc-kube-api-access-w94rq\") pod \"calico-typha-67cb858f94-hcxlg\" (UID: \"e1af0a9f-54e7-407d-8551-28cd36126abc\") " pod="calico-system/calico-typha-67cb858f94-hcxlg" Jan 20 07:02:32.751654 kubelet[2867]: I0120 07:02:32.750265 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1af0a9f-54e7-407d-8551-28cd36126abc-tigera-ca-bundle\") pod \"calico-typha-67cb858f94-hcxlg\" (UID: \"e1af0a9f-54e7-407d-8551-28cd36126abc\") " pod="calico-system/calico-typha-67cb858f94-hcxlg" Jan 20 07:02:32.751654 kubelet[2867]: I0120 07:02:32.750375 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e1af0a9f-54e7-407d-8551-28cd36126abc-typha-certs\") pod \"calico-typha-67cb858f94-hcxlg\" (UID: \"e1af0a9f-54e7-407d-8551-28cd36126abc\") " 
pod="calico-system/calico-typha-67cb858f94-hcxlg" Jan 20 07:02:32.792289 systemd[1]: Created slice kubepods-besteffort-pod1b6c8572_6a94_480a_a45d_4cb03f129465.slice - libcontainer container kubepods-besteffort-pod1b6c8572_6a94_480a_a45d_4cb03f129465.slice. Jan 20 07:02:32.853571 kubelet[2867]: I0120 07:02:32.852657 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b6c8572-6a94-480a-a45d-4cb03f129465-lib-modules\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.854286 kubelet[2867]: I0120 07:02:32.854236 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1b6c8572-6a94-480a-a45d-4cb03f129465-var-lib-calico\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.855480 kubelet[2867]: I0120 07:02:32.854388 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1b6c8572-6a94-480a-a45d-4cb03f129465-var-run-calico\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.855678 kubelet[2867]: I0120 07:02:32.855654 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmkr7\" (UniqueName: \"kubernetes.io/projected/1b6c8572-6a94-480a-a45d-4cb03f129465-kube-api-access-kmkr7\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.855933 kubelet[2867]: I0120 07:02:32.855911 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1b6c8572-6a94-480a-a45d-4cb03f129465-cni-bin-dir\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.858094 kubelet[2867]: I0120 07:02:32.857354 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1b6c8572-6a94-480a-a45d-4cb03f129465-cni-net-dir\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.858094 kubelet[2867]: I0120 07:02:32.857610 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1b6c8572-6a94-480a-a45d-4cb03f129465-node-certs\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.858094 kubelet[2867]: I0120 07:02:32.857666 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1b6c8572-6a94-480a-a45d-4cb03f129465-flexvol-driver-host\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.858094 kubelet[2867]: I0120 07:02:32.857701 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1b6c8572-6a94-480a-a45d-4cb03f129465-tigera-ca-bundle\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.858094 kubelet[2867]: I0120 07:02:32.857723 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1b6c8572-6a94-480a-a45d-4cb03f129465-cni-log-dir\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.858609 kubelet[2867]: I0120 07:02:32.857835 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1b6c8572-6a94-480a-a45d-4cb03f129465-policysync\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.858609 kubelet[2867]: I0120 07:02:32.857871 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b6c8572-6a94-480a-a45d-4cb03f129465-xtables-lock\") pod \"calico-node-n2nks\" (UID: \"1b6c8572-6a94-480a-a45d-4cb03f129465\") " pod="calico-system/calico-node-n2nks" Jan 20 07:02:32.902723 kubelet[2867]: E0120 07:02:32.902110 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:32.906928 containerd[1604]: time="2026-01-20T07:02:32.906789775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67cb858f94-hcxlg,Uid:e1af0a9f-54e7-407d-8551-28cd36126abc,Namespace:calico-system,Attempt:0,}" Jan 20 07:02:32.983725 kubelet[2867]: E0120 07:02:32.983554 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:32.983725 kubelet[2867]: W0120 07:02:32.983605 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:32.983725 kubelet[2867]: E0120 07:02:32.983676 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.004149 kubelet[2867]: E0120 07:02:33.004061 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.004149 kubelet[2867]: W0120 07:02:33.004087 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.004149 kubelet[2867]: E0120 07:02:33.004111 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.099269 kubelet[2867]: E0120 07:02:33.099130 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:33.123812 containerd[1604]: time="2026-01-20T07:02:33.123522014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n2nks,Uid:1b6c8572-6a94-480a-a45d-4cb03f129465,Namespace:calico-system,Attempt:0,}" Jan 20 07:02:33.137146 kubelet[2867]: E0120 07:02:33.135557 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:02:33.159827 containerd[1604]: time="2026-01-20T07:02:33.159784005Z" level=info msg="connecting to shim 507c42024b0323ee9c1a241cd7e29262c2bb974ec2d3b38e53aca8e0cc23fa19" address="unix:///run/containerd/s/e6d59b866109fac5cba0fba33f4da99da270f520a6807b4aea19311a71411e52" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:02:33.170095 kubelet[2867]: E0120 07:02:33.170068 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.170476 kubelet[2867]: W0120 07:02:33.170453 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.170592 kubelet[2867]: E0120 07:02:33.170572 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.171457 kubelet[2867]: E0120 07:02:33.171440 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.171558 kubelet[2867]: W0120 07:02:33.171543 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.171702 kubelet[2867]: E0120 07:02:33.171665 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.172355 kubelet[2867]: E0120 07:02:33.172338 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.173078 kubelet[2867]: W0120 07:02:33.172909 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.173078 kubelet[2867]: E0120 07:02:33.172927 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.174044 kubelet[2867]: E0120 07:02:33.173964 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.174044 kubelet[2867]: W0120 07:02:33.173982 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.174044 kubelet[2867]: E0120 07:02:33.173996 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.174842 kubelet[2867]: E0120 07:02:33.174756 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.174842 kubelet[2867]: W0120 07:02:33.174771 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.174842 kubelet[2867]: E0120 07:02:33.174784 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.175720 kubelet[2867]: E0120 07:02:33.175665 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.176216 kubelet[2867]: W0120 07:02:33.175705 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.176216 kubelet[2867]: E0120 07:02:33.176057 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.177800 kubelet[2867]: E0120 07:02:33.177640 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.177800 kubelet[2867]: W0120 07:02:33.177656 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.177800 kubelet[2867]: E0120 07:02:33.177669 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.178175 kubelet[2867]: E0120 07:02:33.178160 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.178327 kubelet[2867]: W0120 07:02:33.178282 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.178756 kubelet[2867]: E0120 07:02:33.178595 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.179100 kubelet[2867]: E0120 07:02:33.178977 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.179100 kubelet[2867]: W0120 07:02:33.178993 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.179100 kubelet[2867]: E0120 07:02:33.179005 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.179738 kubelet[2867]: E0120 07:02:33.179616 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.179738 kubelet[2867]: W0120 07:02:33.179630 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.179738 kubelet[2867]: E0120 07:02:33.179643 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.180171 kubelet[2867]: E0120 07:02:33.180086 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.180171 kubelet[2867]: W0120 07:02:33.180102 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.180171 kubelet[2867]: E0120 07:02:33.180114 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.180951 kubelet[2867]: E0120 07:02:33.180899 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.180951 kubelet[2867]: W0120 07:02:33.180914 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.180951 kubelet[2867]: E0120 07:02:33.180927 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.181855 kubelet[2867]: E0120 07:02:33.181697 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.181855 kubelet[2867]: W0120 07:02:33.181715 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.181855 kubelet[2867]: E0120 07:02:33.181728 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.182555 kubelet[2867]: E0120 07:02:33.182510 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.182555 kubelet[2867]: W0120 07:02:33.182525 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.182555 kubelet[2867]: E0120 07:02:33.182538 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.183651 kubelet[2867]: E0120 07:02:33.183552 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.183802 kubelet[2867]: W0120 07:02:33.183749 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.183802 kubelet[2867]: E0120 07:02:33.183772 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.185891 kubelet[2867]: E0120 07:02:33.185694 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.185891 kubelet[2867]: W0120 07:02:33.185712 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.185891 kubelet[2867]: E0120 07:02:33.185727 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.185891 kubelet[2867]: I0120 07:02:33.185759 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dd9207e8-fe1e-43a2-ab22-bb4ac860e560-socket-dir\") pod \"csi-node-driver-w6jwn\" (UID: \"dd9207e8-fe1e-43a2-ab22-bb4ac860e560\") " pod="calico-system/csi-node-driver-w6jwn" Jan 20 07:02:33.186477 kubelet[2867]: E0120 07:02:33.186323 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.186477 kubelet[2867]: W0120 07:02:33.186339 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.186477 kubelet[2867]: E0120 07:02:33.186352 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.186477 kubelet[2867]: I0120 07:02:33.186372 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9207e8-fe1e-43a2-ab22-bb4ac860e560-kubelet-dir\") pod \"csi-node-driver-w6jwn\" (UID: \"dd9207e8-fe1e-43a2-ab22-bb4ac860e560\") " pod="calico-system/csi-node-driver-w6jwn" Jan 20 07:02:33.187119 kubelet[2867]: E0120 07:02:33.186925 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.187119 kubelet[2867]: W0120 07:02:33.186958 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.187119 kubelet[2867]: E0120 07:02:33.186971 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.187119 kubelet[2867]: I0120 07:02:33.186990 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dd9207e8-fe1e-43a2-ab22-bb4ac860e560-registration-dir\") pod \"csi-node-driver-w6jwn\" (UID: \"dd9207e8-fe1e-43a2-ab22-bb4ac860e560\") " pod="calico-system/csi-node-driver-w6jwn" Jan 20 07:02:33.187801 kubelet[2867]: E0120 07:02:33.187753 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.187801 kubelet[2867]: W0120 07:02:33.187769 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.187801 kubelet[2867]: E0120 07:02:33.187783 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.188727 kubelet[2867]: E0120 07:02:33.188677 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.188851 kubelet[2867]: W0120 07:02:33.188692 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.188851 kubelet[2867]: E0120 07:02:33.188823 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.189326 kubelet[2867]: E0120 07:02:33.189282 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.189326 kubelet[2867]: W0120 07:02:33.189297 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.189326 kubelet[2867]: E0120 07:02:33.189309 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.189998 kubelet[2867]: E0120 07:02:33.189951 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.189998 kubelet[2867]: W0120 07:02:33.189968 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.189998 kubelet[2867]: E0120 07:02:33.189981 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.190899 kubelet[2867]: E0120 07:02:33.190855 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.190899 kubelet[2867]: W0120 07:02:33.190870 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.190899 kubelet[2867]: E0120 07:02:33.190882 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.191728 kubelet[2867]: E0120 07:02:33.191708 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.191913 kubelet[2867]: W0120 07:02:33.191809 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.191913 kubelet[2867]: E0120 07:02:33.191828 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.192354 kubelet[2867]: E0120 07:02:33.192311 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.192354 kubelet[2867]: W0120 07:02:33.192325 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.192354 kubelet[2867]: E0120 07:02:33.192338 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.193092 kubelet[2867]: E0120 07:02:33.193015 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.193092 kubelet[2867]: W0120 07:02:33.193028 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.193092 kubelet[2867]: E0120 07:02:33.193039 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.193549 kubelet[2867]: E0120 07:02:33.193469 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.193549 kubelet[2867]: W0120 07:02:33.193484 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.193549 kubelet[2867]: E0120 07:02:33.193496 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.194197 kubelet[2867]: E0120 07:02:33.194111 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.194197 kubelet[2867]: W0120 07:02:33.194125 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.194197 kubelet[2867]: E0120 07:02:33.194138 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.194810 kubelet[2867]: E0120 07:02:33.194723 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.194810 kubelet[2867]: W0120 07:02:33.194737 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.194810 kubelet[2867]: E0120 07:02:33.194750 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.302821 kubelet[2867]: E0120 07:02:33.302678 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.303495 kubelet[2867]: W0120 07:02:33.302901 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.303495 kubelet[2867]: E0120 07:02:33.302931 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.304121 kubelet[2867]: I0120 07:02:33.303619 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dd9207e8-fe1e-43a2-ab22-bb4ac860e560-varrun\") pod \"csi-node-driver-w6jwn\" (UID: \"dd9207e8-fe1e-43a2-ab22-bb4ac860e560\") " pod="calico-system/csi-node-driver-w6jwn" Jan 20 07:02:33.305638 kubelet[2867]: E0120 07:02:33.305587 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.305921 kubelet[2867]: W0120 07:02:33.305904 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.306562 kubelet[2867]: E0120 07:02:33.306107 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.307294 kubelet[2867]: E0120 07:02:33.307278 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.307547 kubelet[2867]: W0120 07:02:33.307402 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.307547 kubelet[2867]: E0120 07:02:33.307527 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.308610 kubelet[2867]: E0120 07:02:33.308533 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.308610 kubelet[2867]: W0120 07:02:33.308549 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.308610 kubelet[2867]: E0120 07:02:33.308562 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.310150 kubelet[2867]: E0120 07:02:33.310093 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.310150 kubelet[2867]: W0120 07:02:33.310108 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.310150 kubelet[2867]: E0120 07:02:33.310119 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.310749 kubelet[2867]: I0120 07:02:33.310280 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7xz\" (UniqueName: \"kubernetes.io/projected/dd9207e8-fe1e-43a2-ab22-bb4ac860e560-kube-api-access-tb7xz\") pod \"csi-node-driver-w6jwn\" (UID: \"dd9207e8-fe1e-43a2-ab22-bb4ac860e560\") " pod="calico-system/csi-node-driver-w6jwn" Jan 20 07:02:33.311391 kubelet[2867]: E0120 07:02:33.311284 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.311391 kubelet[2867]: W0120 07:02:33.311298 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.311391 kubelet[2867]: E0120 07:02:33.311309 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.312076 kubelet[2867]: E0120 07:02:33.311861 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.312076 kubelet[2867]: W0120 07:02:33.312033 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.312076 kubelet[2867]: E0120 07:02:33.312056 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.313849 kubelet[2867]: E0120 07:02:33.313728 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.313849 kubelet[2867]: W0120 07:02:33.313743 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.313849 kubelet[2867]: E0120 07:02:33.313753 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.315049 kubelet[2867]: E0120 07:02:33.314960 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.315049 kubelet[2867]: W0120 07:02:33.314976 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.315049 kubelet[2867]: E0120 07:02:33.314987 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.316116 kubelet[2867]: E0120 07:02:33.316102 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.316254 kubelet[2867]: W0120 07:02:33.316240 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.316378 kubelet[2867]: E0120 07:02:33.316350 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.317626 kubelet[2867]: E0120 07:02:33.317459 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.317626 kubelet[2867]: W0120 07:02:33.317473 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.317626 kubelet[2867]: E0120 07:02:33.317484 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.320631 kubelet[2867]: E0120 07:02:33.320584 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.320631 kubelet[2867]: W0120 07:02:33.320622 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.320751 kubelet[2867]: E0120 07:02:33.320657 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.324580 kubelet[2867]: E0120 07:02:33.324343 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.324580 kubelet[2867]: W0120 07:02:33.324367 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.324580 kubelet[2867]: E0120 07:02:33.324573 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.326766 kubelet[2867]: E0120 07:02:33.326734 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.326766 kubelet[2867]: W0120 07:02:33.326756 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.326766 kubelet[2867]: E0120 07:02:33.326769 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.331341 kubelet[2867]: E0120 07:02:33.328404 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.331341 kubelet[2867]: W0120 07:02:33.328420 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.331341 kubelet[2867]: E0120 07:02:33.328631 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.331341 kubelet[2867]: E0120 07:02:33.330343 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.331341 kubelet[2867]: W0120 07:02:33.330354 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.331341 kubelet[2867]: E0120 07:02:33.330365 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.335577 kubelet[2867]: E0120 07:02:33.335509 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.335577 kubelet[2867]: W0120 07:02:33.335550 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.335773 kubelet[2867]: E0120 07:02:33.335586 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.338837 systemd[1]: Started cri-containerd-507c42024b0323ee9c1a241cd7e29262c2bb974ec2d3b38e53aca8e0cc23fa19.scope - libcontainer container 507c42024b0323ee9c1a241cd7e29262c2bb974ec2d3b38e53aca8e0cc23fa19. Jan 20 07:02:33.340021 kubelet[2867]: E0120 07:02:33.339035 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.340021 kubelet[2867]: W0120 07:02:33.339049 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.340021 kubelet[2867]: E0120 07:02:33.339065 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.341251 kubelet[2867]: E0120 07:02:33.340469 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.361811 kubelet[2867]: W0120 07:02:33.340506 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.361811 kubelet[2867]: E0120 07:02:33.344667 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.361811 kubelet[2867]: E0120 07:02:33.345703 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.361811 kubelet[2867]: W0120 07:02:33.345717 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.361811 kubelet[2867]: E0120 07:02:33.345739 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.361811 kubelet[2867]: E0120 07:02:33.349667 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.361811 kubelet[2867]: W0120 07:02:33.349678 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.361811 kubelet[2867]: E0120 07:02:33.349690 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.367090 containerd[1604]: time="2026-01-20T07:02:33.366876616Z" level=info msg="connecting to shim 6c175be8e3499886c72c3d7549f5bd28a88f0161d01746298e264d8e61f84222" address="unix:///run/containerd/s/9914c6440b77288d351688d1a2633f10b936fe692d208d104a87395efb9a8062" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:02:33.440739 kubelet[2867]: E0120 07:02:33.439116 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.440739 kubelet[2867]: W0120 07:02:33.439201 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.440739 kubelet[2867]: E0120 07:02:33.439231 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.440739 kubelet[2867]: E0120 07:02:33.439879 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.440739 kubelet[2867]: W0120 07:02:33.439896 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.440739 kubelet[2867]: E0120 07:02:33.439920 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.440739 kubelet[2867]: E0120 07:02:33.440389 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.440739 kubelet[2867]: W0120 07:02:33.440411 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.440739 kubelet[2867]: E0120 07:02:33.440424 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.441123 kubelet[2867]: E0120 07:02:33.440805 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.441123 kubelet[2867]: W0120 07:02:33.440815 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.441123 kubelet[2867]: E0120 07:02:33.440825 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.441291 kubelet[2867]: E0120 07:02:33.441262 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.441346 kubelet[2867]: W0120 07:02:33.441294 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.441346 kubelet[2867]: E0120 07:02:33.441304 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.443572 kubelet[2867]: E0120 07:02:33.443527 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.443572 kubelet[2867]: W0120 07:02:33.443547 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.443572 kubelet[2867]: E0120 07:02:33.443560 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 07:02:33.444216 kubelet[2867]: E0120 07:02:33.444157 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.444216 kubelet[2867]: W0120 07:02:33.444208 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.444315 kubelet[2867]: E0120 07:02:33.444222 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.448209 kubelet[2867]: E0120 07:02:33.446133 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.448209 kubelet[2867]: W0120 07:02:33.446159 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.448209 kubelet[2867]: E0120 07:02:33.446172 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.450201 kubelet[2867]: E0120 07:02:33.450055 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.450201 kubelet[2867]: W0120 07:02:33.450074 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.450201 kubelet[2867]: E0120 07:02:33.450111 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.454268 kubelet[2867]: E0120 07:02:33.454239 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.454268 kubelet[2867]: W0120 07:02:33.454260 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.454354 kubelet[2867]: E0120 07:02:33.454274 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.455700 systemd[1]: Started cri-containerd-6c175be8e3499886c72c3d7549f5bd28a88f0161d01746298e264d8e61f84222.scope - libcontainer container 6c175be8e3499886c72c3d7549f5bd28a88f0161d01746298e264d8e61f84222. 
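The recurring driver-call failures above share a single cause: the kubelet probes the FlexVolume plugin directory and invokes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the "init" argument, but that executable has not been installed yet, so the empty output cannot be unmarshalled as JSON. They are expected to stop once the calico-node pod's flexvol-driver container (created and started further down in this log) places the real driver binary into that host path. For orientation only, a FlexVolume executable answering "init" is expected to print a JSON status object on stdout; the hypothetical Python stand-in below (not Calico's actual uds driver) sketches the minimal shape of that reply.

    #!/usr/bin/env python3
    # Hypothetical FlexVolume driver stand-in (illustration only; the real
    # driver at .../nodeagent~uds/uds is installed by calico-node's
    # flexvol-driver container). The kubelet runs `<driver> init` and parses
    # stdout as JSON; an empty reply is what produces the
    # "unexpected end of JSON input" errors recorded above.
    import json
    import sys

    def main() -> int:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            # Minimal successful init reply; attach/detach is not implemented.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Anything this sketch does not handle is reported as unsupported.
        print(json.dumps({"status": "Not supported",
                          "message": "only init is implemented in this sketch"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

The audit records that follow encode the runc command line as a hex string in the proctitle field, with NUL bytes separating arguments. A small helper along these lines (assuming Python 3) makes those records readable; the sample value is a truncated prefix of the proctitle strings seen below.

    def decode_proctitle(hex_value: str) -> list:
        """Split an audit PROCTITLE hex dump on NUL bytes and decode each argument."""
        return [part.decode("utf-8", "replace")
                for part in bytes.fromhex(hex_value).split(b"\x00") if part]

    # Prints: ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
    print(decode_proctitle(
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F"
        "72756E632F6B38732E696F002D2D6C6F67"))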
Jan 20 07:02:33.475003 kubelet[2867]: E0120 07:02:33.474917 2867 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 07:02:33.475003 kubelet[2867]: W0120 07:02:33.474938 2867 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 07:02:33.475003 kubelet[2867]: E0120 07:02:33.474959 2867 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 07:02:33.556000 audit: BPF prog-id=157 op=LOAD Jan 20 07:02:33.557000 audit: BPF prog-id=158 op=LOAD Jan 20 07:02:33.557000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3375 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663313735626538653334393938383663373263336437353439663562 Jan 20 07:02:33.557000 audit: BPF prog-id=158 op=UNLOAD Jan 20 07:02:33.557000 audit[3390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663313735626538653334393938383663373263336437353439663562 Jan 20 07:02:33.558000 audit: BPF prog-id=159 op=LOAD Jan 20 07:02:33.558000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3375 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663313735626538653334393938383663373263336437353439663562 Jan 20 07:02:33.558000 audit: BPF prog-id=160 op=LOAD Jan 20 07:02:33.558000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3375 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663313735626538653334393938383663373263336437353439663562 Jan 20 07:02:33.558000 audit: BPF prog-id=160 op=UNLOAD Jan 20 07:02:33.558000 audit[3390]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663313735626538653334393938383663373263336437353439663562 Jan 20 07:02:33.558000 audit: BPF prog-id=159 op=UNLOAD Jan 20 07:02:33.558000 audit[3390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663313735626538653334393938383663373263336437353439663562 Jan 20 07:02:33.558000 audit: BPF prog-id=161 op=LOAD Jan 20 07:02:33.558000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3375 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663313735626538653334393938383663373263336437353439663562 Jan 20 07:02:33.625000 audit: BPF prog-id=162 op=LOAD Jan 20 07:02:33.626000 audit: BPF prog-id=163 op=LOAD Jan 20 07:02:33.626000 audit[3334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3286 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376334323032346230333233656539633161323431636437653239 Jan 20 07:02:33.626000 audit: BPF prog-id=163 op=UNLOAD Jan 20 07:02:33.626000 audit[3334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376334323032346230333233656539633161323431636437653239 Jan 20 07:02:33.626000 audit: BPF prog-id=164 op=LOAD Jan 20 07:02:33.626000 audit[3334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3286 pid=3334 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376334323032346230333233656539633161323431636437653239 Jan 20 07:02:33.626000 audit: BPF prog-id=165 op=LOAD Jan 20 07:02:33.626000 audit[3334]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3286 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376334323032346230333233656539633161323431636437653239 Jan 20 07:02:33.626000 audit: BPF prog-id=165 op=UNLOAD Jan 20 07:02:33.626000 audit[3334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376334323032346230333233656539633161323431636437653239 Jan 20 07:02:33.626000 audit: BPF prog-id=164 op=UNLOAD Jan 20 07:02:33.626000 audit[3334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376334323032346230333233656539633161323431636437653239 Jan 20 07:02:33.627000 audit: BPF prog-id=166 op=LOAD Jan 20 07:02:33.627000 audit[3334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3286 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:33.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530376334323032346230333233656539633161323431636437653239 Jan 20 07:02:33.696991 containerd[1604]: time="2026-01-20T07:02:33.696452072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n2nks,Uid:1b6c8572-6a94-480a-a45d-4cb03f129465,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c175be8e3499886c72c3d7549f5bd28a88f0161d01746298e264d8e61f84222\"" Jan 20 07:02:33.699699 
kubelet[2867]: E0120 07:02:33.699664 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:33.702695 containerd[1604]: time="2026-01-20T07:02:33.702665719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 07:02:33.712294 containerd[1604]: time="2026-01-20T07:02:33.712128161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67cb858f94-hcxlg,Uid:e1af0a9f-54e7-407d-8551-28cd36126abc,Namespace:calico-system,Attempt:0,} returns sandbox id \"507c42024b0323ee9c1a241cd7e29262c2bb974ec2d3b38e53aca8e0cc23fa19\"" Jan 20 07:02:33.713514 kubelet[2867]: E0120 07:02:33.713486 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:34.388956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2828128065.mount: Deactivated successfully. Jan 20 07:02:34.514733 containerd[1604]: time="2026-01-20T07:02:34.514668703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:34.516262 containerd[1604]: time="2026-01-20T07:02:34.516225909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 20 07:02:34.516658 containerd[1604]: time="2026-01-20T07:02:34.516625111Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:34.521029 containerd[1604]: time="2026-01-20T07:02:34.520349667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:34.521029 containerd[1604]: time="2026-01-20T07:02:34.520887799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 818.16912ms" Jan 20 07:02:34.521029 containerd[1604]: time="2026-01-20T07:02:34.520918769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 07:02:34.522344 containerd[1604]: time="2026-01-20T07:02:34.522310395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 07:02:34.527707 containerd[1604]: time="2026-01-20T07:02:34.527660917Z" level=info msg="CreateContainer within sandbox \"6c175be8e3499886c72c3d7549f5bd28a88f0161d01746298e264d8e61f84222\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 07:02:34.542284 containerd[1604]: time="2026-01-20T07:02:34.541405305Z" level=info msg="Container 5f366c9e4430128938b944c413168bd7ddca54325d62b6683ed6a29c7756e8f3: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:02:34.547859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744385835.mount: 
Deactivated successfully. Jan 20 07:02:34.560298 containerd[1604]: time="2026-01-20T07:02:34.560225934Z" level=info msg="CreateContainer within sandbox \"6c175be8e3499886c72c3d7549f5bd28a88f0161d01746298e264d8e61f84222\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5f366c9e4430128938b944c413168bd7ddca54325d62b6683ed6a29c7756e8f3\"" Jan 20 07:02:34.561011 containerd[1604]: time="2026-01-20T07:02:34.560986407Z" level=info msg="StartContainer for \"5f366c9e4430128938b944c413168bd7ddca54325d62b6683ed6a29c7756e8f3\"" Jan 20 07:02:34.566782 containerd[1604]: time="2026-01-20T07:02:34.566720981Z" level=info msg="connecting to shim 5f366c9e4430128938b944c413168bd7ddca54325d62b6683ed6a29c7756e8f3" address="unix:///run/containerd/s/9914c6440b77288d351688d1a2633f10b936fe692d208d104a87395efb9a8062" protocol=ttrpc version=3 Jan 20 07:02:34.603613 systemd[1]: Started cri-containerd-5f366c9e4430128938b944c413168bd7ddca54325d62b6683ed6a29c7756e8f3.scope - libcontainer container 5f366c9e4430128938b944c413168bd7ddca54325d62b6683ed6a29c7756e8f3. Jan 20 07:02:34.724000 audit: BPF prog-id=167 op=LOAD Jan 20 07:02:34.724000 audit[3448]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3375 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:34.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566333636633965343433303132383933386239343463343133313638 Jan 20 07:02:34.724000 audit: BPF prog-id=168 op=LOAD Jan 20 07:02:34.724000 audit[3448]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3375 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:34.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566333636633965343433303132383933386239343463343133313638 Jan 20 07:02:34.724000 audit: BPF prog-id=168 op=UNLOAD Jan 20 07:02:34.724000 audit[3448]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:34.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566333636633965343433303132383933386239343463343133313638 Jan 20 07:02:34.724000 audit: BPF prog-id=167 op=UNLOAD Jan 20 07:02:34.724000 audit[3448]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:34.724000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566333636633965343433303132383933386239343463343133313638 Jan 20 07:02:34.724000 audit: BPF prog-id=169 op=LOAD Jan 20 07:02:34.724000 audit[3448]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3375 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:34.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566333636633965343433303132383933386239343463343133313638 Jan 20 07:02:34.823484 containerd[1604]: time="2026-01-20T07:02:34.823140748Z" level=info msg="StartContainer for \"5f366c9e4430128938b944c413168bd7ddca54325d62b6683ed6a29c7756e8f3\" returns successfully" Jan 20 07:02:34.838552 systemd[1]: cri-containerd-5f366c9e4430128938b944c413168bd7ddca54325d62b6683ed6a29c7756e8f3.scope: Deactivated successfully. Jan 20 07:02:34.841000 audit: BPF prog-id=169 op=UNLOAD Jan 20 07:02:34.844225 containerd[1604]: time="2026-01-20T07:02:34.844157236Z" level=info msg="received container exit event container_id:\"5f366c9e4430128938b944c413168bd7ddca54325d62b6683ed6a29c7756e8f3\" id:\"5f366c9e4430128938b944c413168bd7ddca54325d62b6683ed6a29c7756e8f3\" pid:3460 exited_at:{seconds:1768892554 nanos:842033527}" Jan 20 07:02:35.248080 kubelet[2867]: E0120 07:02:35.247283 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:02:35.437418 kubelet[2867]: E0120 07:02:35.437381 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:37.269229 kubelet[2867]: E0120 07:02:37.268615 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:02:37.415214 containerd[1604]: time="2026-01-20T07:02:37.414932153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:37.417208 containerd[1604]: time="2026-01-20T07:02:37.417153211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33736633" Jan 20 07:02:37.419034 containerd[1604]: time="2026-01-20T07:02:37.418398706Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:37.421419 containerd[1604]: time="2026-01-20T07:02:37.421367086Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:37.422440 containerd[1604]: time="2026-01-20T07:02:37.422402970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.900058965s" Jan 20 07:02:37.422440 containerd[1604]: time="2026-01-20T07:02:37.422437780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 07:02:37.425714 containerd[1604]: time="2026-01-20T07:02:37.425682031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 07:02:37.470796 containerd[1604]: time="2026-01-20T07:02:37.470728002Z" level=info msg="CreateContainer within sandbox \"507c42024b0323ee9c1a241cd7e29262c2bb974ec2d3b38e53aca8e0cc23fa19\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 07:02:37.483444 containerd[1604]: time="2026-01-20T07:02:37.483409856Z" level=info msg="Container 0d98a30a365dd40163baadb735d246672da4d8dd0249c411b1b9372f0d1ad50c: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:02:37.492140 containerd[1604]: time="2026-01-20T07:02:37.492098497Z" level=info msg="CreateContainer within sandbox \"507c42024b0323ee9c1a241cd7e29262c2bb974ec2d3b38e53aca8e0cc23fa19\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0d98a30a365dd40163baadb735d246672da4d8dd0249c411b1b9372f0d1ad50c\"" Jan 20 07:02:37.494558 containerd[1604]: time="2026-01-20T07:02:37.493694753Z" level=info msg="StartContainer for \"0d98a30a365dd40163baadb735d246672da4d8dd0249c411b1b9372f0d1ad50c\"" Jan 20 07:02:37.496814 containerd[1604]: time="2026-01-20T07:02:37.496790094Z" level=info msg="connecting to shim 0d98a30a365dd40163baadb735d246672da4d8dd0249c411b1b9372f0d1ad50c" address="unix:///run/containerd/s/e6d59b866109fac5cba0fba33f4da99da270f520a6807b4aea19311a71411e52" protocol=ttrpc version=3 Jan 20 07:02:37.596598 systemd[1]: Started cri-containerd-0d98a30a365dd40163baadb735d246672da4d8dd0249c411b1b9372f0d1ad50c.scope - libcontainer container 0d98a30a365dd40163baadb735d246672da4d8dd0249c411b1b9372f0d1ad50c. 
Jan 20 07:02:37.705000 audit: BPF prog-id=170 op=LOAD Jan 20 07:02:37.707356 kernel: kauditd_printk_skb: 62 callbacks suppressed Jan 20 07:02:37.707479 kernel: audit: type=1334 audit(1768892557.705:563): prog-id=170 op=LOAD Jan 20 07:02:37.706000 audit: BPF prog-id=171 op=LOAD Jan 20 07:02:37.713099 kernel: audit: type=1334 audit(1768892557.706:564): prog-id=171 op=LOAD Jan 20 07:02:37.713911 kernel: audit: type=1300 audit(1768892557.706:564): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3286 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:37.706000 audit[3504]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3286 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:37.731208 kernel: audit: type=1327 audit(1768892557.706:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064393861333061333635646434303136336261616462373335643234 Jan 20 07:02:37.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064393861333061333635646434303136336261616462373335643234 Jan 20 07:02:37.706000 audit: BPF prog-id=171 op=UNLOAD Jan 20 07:02:37.742960 kernel: audit: type=1334 audit(1768892557.706:565): prog-id=171 op=UNLOAD Jan 20 07:02:37.743063 kernel: audit: type=1300 audit(1768892557.706:565): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:37.706000 audit[3504]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:37.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064393861333061333635646434303136336261616462373335643234 Jan 20 07:02:37.754865 kernel: audit: type=1327 audit(1768892557.706:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064393861333061333635646434303136336261616462373335643234 Jan 20 07:02:37.754919 kernel: audit: type=1334 audit(1768892557.706:566): prog-id=172 op=LOAD Jan 20 07:02:37.706000 audit: BPF prog-id=172 op=LOAD Jan 20 07:02:37.763672 kernel: audit: type=1300 audit(1768892557.706:566): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3286 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:37.706000 audit[3504]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3286 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:37.772399 kernel: audit: type=1327 audit(1768892557.706:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064393861333061333635646434303136336261616462373335643234 Jan 20 07:02:37.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064393861333061333635646434303136336261616462373335643234 Jan 20 07:02:37.707000 audit: BPF prog-id=173 op=LOAD Jan 20 07:02:37.707000 audit[3504]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3286 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:37.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064393861333061333635646434303136336261616462373335643234 Jan 20 07:02:37.707000 audit: BPF prog-id=173 op=UNLOAD Jan 20 07:02:37.707000 audit[3504]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:37.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064393861333061333635646434303136336261616462373335643234 Jan 20 07:02:37.707000 audit: BPF prog-id=172 op=UNLOAD Jan 20 07:02:37.707000 audit[3504]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3286 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:37.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064393861333061333635646434303136336261616462373335643234 Jan 20 07:02:37.707000 audit: BPF prog-id=174 op=LOAD Jan 20 07:02:37.707000 audit[3504]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3286 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:37.707000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064393861333061333635646434303136336261616462373335643234 Jan 20 07:02:37.901087 containerd[1604]: time="2026-01-20T07:02:37.900720877Z" level=info msg="StartContainer for \"0d98a30a365dd40163baadb735d246672da4d8dd0249c411b1b9372f0d1ad50c\" returns successfully" Jan 20 07:02:38.479721 kubelet[2867]: E0120 07:02:38.479666 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:38.666141 kubelet[2867]: I0120 07:02:38.665934 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67cb858f94-hcxlg" podStartSLOduration=2.9559172289999998 podStartE2EDuration="6.665813574s" podCreationTimestamp="2026-01-20 07:02:32 +0000 UTC" firstStartedPulling="2026-01-20 07:02:33.714211191 +0000 UTC m=+26.744884726" lastFinishedPulling="2026-01-20 07:02:37.424107526 +0000 UTC m=+30.454781071" observedRunningTime="2026-01-20 07:02:38.66466934 +0000 UTC m=+31.695342885" watchObservedRunningTime="2026-01-20 07:02:38.665813574 +0000 UTC m=+31.696487109" Jan 20 07:02:39.256247 kubelet[2867]: E0120 07:02:39.247333 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:02:39.483395 kubelet[2867]: I0120 07:02:39.483341 2867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 07:02:39.484565 kubelet[2867]: E0120 07:02:39.484544 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:41.248589 kubelet[2867]: E0120 07:02:41.248331 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:02:42.841135 containerd[1604]: time="2026-01-20T07:02:42.841022609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:42.842856 containerd[1604]: time="2026-01-20T07:02:42.842796844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 20 07:02:42.844225 containerd[1604]: time="2026-01-20T07:02:42.843603396Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:42.848681 containerd[1604]: time="2026-01-20T07:02:42.848029329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:42.849951 containerd[1604]: time="2026-01-20T07:02:42.849902264Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.42362201s" Jan 20 07:02:42.850123 containerd[1604]: time="2026-01-20T07:02:42.850098394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 07:02:42.858868 containerd[1604]: time="2026-01-20T07:02:42.858810517Z" level=info msg="CreateContainer within sandbox \"6c175be8e3499886c72c3d7549f5bd28a88f0161d01746298e264d8e61f84222\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 07:02:42.919897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215993379.mount: Deactivated successfully. Jan 20 07:02:42.923278 containerd[1604]: time="2026-01-20T07:02:42.921705397Z" level=info msg="Container c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:02:42.937071 containerd[1604]: time="2026-01-20T07:02:42.937000349Z" level=info msg="CreateContainer within sandbox \"6c175be8e3499886c72c3d7549f5bd28a88f0161d01746298e264d8e61f84222\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7\"" Jan 20 07:02:42.938126 containerd[1604]: time="2026-01-20T07:02:42.938093253Z" level=info msg="StartContainer for \"c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7\"" Jan 20 07:02:42.941656 containerd[1604]: time="2026-01-20T07:02:42.941359631Z" level=info msg="connecting to shim c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7" address="unix:///run/containerd/s/9914c6440b77288d351688d1a2633f10b936fe692d208d104a87395efb9a8062" protocol=ttrpc version=3 Jan 20 07:02:43.069571 systemd[1]: Started cri-containerd-c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7.scope - libcontainer container c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7. 
Jan 20 07:02:43.223830 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 07:02:43.224037 kernel: audit: type=1334 audit(1768892563.221:571): prog-id=175 op=LOAD Jan 20 07:02:43.221000 audit: BPF prog-id=175 op=LOAD Jan 20 07:02:43.221000 audit[3548]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3375 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:43.229467 kernel: audit: type=1300 audit(1768892563.221:571): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3375 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:43.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335303733303830373866356236333234373063383366323634363837 Jan 20 07:02:43.239792 kernel: audit: type=1327 audit(1768892563.221:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335303733303830373866356236333234373063383366323634363837 Jan 20 07:02:43.221000 audit: BPF prog-id=176 op=LOAD Jan 20 07:02:43.247245 kubelet[2867]: E0120 07:02:43.245671 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:02:43.248047 kernel: audit: type=1334 audit(1768892563.221:572): prog-id=176 op=LOAD Jan 20 07:02:43.221000 audit[3548]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3375 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:43.263404 kernel: audit: type=1300 audit(1768892563.221:572): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3375 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:43.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335303733303830373866356236333234373063383366323634363837 Jan 20 07:02:43.274492 kernel: audit: type=1327 audit(1768892563.221:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335303733303830373866356236333234373063383366323634363837 Jan 20 07:02:43.221000 audit: BPF prog-id=176 op=UNLOAD Jan 20 07:02:43.279473 kernel: audit: type=1334 audit(1768892563.221:573): prog-id=176 
op=UNLOAD Jan 20 07:02:43.289534 kernel: audit: type=1300 audit(1768892563.221:573): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:43.221000 audit[3548]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:43.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335303733303830373866356236333234373063383366323634363837 Jan 20 07:02:43.310712 kernel: audit: type=1327 audit(1768892563.221:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335303733303830373866356236333234373063383366323634363837 Jan 20 07:02:43.311135 kernel: audit: type=1334 audit(1768892563.221:574): prog-id=175 op=UNLOAD Jan 20 07:02:43.221000 audit: BPF prog-id=175 op=UNLOAD Jan 20 07:02:43.221000 audit[3548]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:43.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335303733303830373866356236333234373063383366323634363837 Jan 20 07:02:43.221000 audit: BPF prog-id=177 op=LOAD Jan 20 07:02:43.221000 audit[3548]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3375 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:43.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335303733303830373866356236333234373063383366323634363837 Jan 20 07:02:43.356212 containerd[1604]: time="2026-01-20T07:02:43.356137224Z" level=info msg="StartContainer for \"c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7\" returns successfully" Jan 20 07:02:43.578236 kubelet[2867]: E0120 07:02:43.574509 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:44.577271 kubelet[2867]: E0120 07:02:44.577142 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:45.253990 kubelet[2867]: E0120 07:02:45.253877 2867 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:02:46.609296 systemd[1]: cri-containerd-c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7.scope: Deactivated successfully. Jan 20 07:02:46.611824 systemd[1]: cri-containerd-c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7.scope: Consumed 3.482s CPU time, 193.6M memory peak, 171.3M written to disk. Jan 20 07:02:46.613000 audit: BPF prog-id=177 op=UNLOAD Jan 20 07:02:46.618150 containerd[1604]: time="2026-01-20T07:02:46.617831772Z" level=info msg="received container exit event container_id:\"c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7\" id:\"c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7\" pid:3561 exited_at:{seconds:1768892566 nanos:613116682}" Jan 20 07:02:46.664310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c507308078f5b632470c83f2646875616bdb2f5b7805769e45122518a5d660b7-rootfs.mount: Deactivated successfully. Jan 20 07:02:46.694329 kubelet[2867]: I0120 07:02:46.692809 2867 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 07:02:46.771781 systemd[1]: Created slice kubepods-burstable-pod2244b5fd_5821_4058_8f7d_13e543d8b404.slice - libcontainer container kubepods-burstable-pod2244b5fd_5821_4058_8f7d_13e543d8b404.slice. Jan 20 07:02:46.791124 systemd[1]: Created slice kubepods-burstable-pode65373ef_4973_4aa5_9425_ede746ebd364.slice - libcontainer container kubepods-burstable-pode65373ef_4973_4aa5_9425_ede746ebd364.slice. 
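
The kubepods slice names created here follow the pattern visible in the log itself: kubepods-<qos>-pod<uid>.slice, with the pod UID's dashes turned into underscores (compare the coredns pod UID 2244b5fd-5821-4058-8f7d-13e543d8b404 in the volume entries just below with the burstable slice above, and likewise for the besteffort slices a few entries later). A sketch of that mapping, reconstructed from the names in this log rather than from kubelet source:

    # Rebuild the pod slice unit names seen in this log from (QoS class, pod UID).
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    assert pod_slice_name("burstable", "2244b5fd-5821-4058-8f7d-13e543d8b404") == \
        "kubepods-burstable-pod2244b5fd_5821_4058_8f7d_13e543d8b404.slice"
    assert pod_slice_name("besteffort", "48809565-7ef3-4c36-a2a9-e27dfb3fe63c") == \
        "kubepods-besteffort-pod48809565_7ef3_4c36_a2a9_e27dfb3fe63c.slice"
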
Jan 20 07:02:46.796258 kubelet[2867]: I0120 07:02:46.795446 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/add8a880-515a-44f3-9fed-8077d26ba5b6-calico-apiserver-certs\") pod \"calico-apiserver-6c495f47-pkkt5\" (UID: \"add8a880-515a-44f3-9fed-8077d26ba5b6\") " pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" Jan 20 07:02:46.796258 kubelet[2867]: I0120 07:02:46.795497 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445ffd04-d4f6-441f-8b52-ca096310b51a-whisker-ca-bundle\") pod \"whisker-8c9fb997c-5zc2h\" (UID: \"445ffd04-d4f6-441f-8b52-ca096310b51a\") " pod="calico-system/whisker-8c9fb997c-5zc2h" Jan 20 07:02:46.796258 kubelet[2867]: I0120 07:02:46.795517 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/18f52096-e17f-46d5-a51d-1ae5ca49fd14-calico-apiserver-certs\") pod \"calico-apiserver-6c495f47-n5kdv\" (UID: \"18f52096-e17f-46d5-a51d-1ae5ca49fd14\") " pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" Jan 20 07:02:46.796258 kubelet[2867]: I0120 07:02:46.795540 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/445ffd04-d4f6-441f-8b52-ca096310b51a-whisker-backend-key-pair\") pod \"whisker-8c9fb997c-5zc2h\" (UID: \"445ffd04-d4f6-441f-8b52-ca096310b51a\") " pod="calico-system/whisker-8c9fb997c-5zc2h" Jan 20 07:02:46.796258 kubelet[2867]: I0120 07:02:46.795561 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zm5g\" (UniqueName: \"kubernetes.io/projected/2244b5fd-5821-4058-8f7d-13e543d8b404-kube-api-access-9zm5g\") pod \"coredns-674b8bbfcf-2dszx\" (UID: \"2244b5fd-5821-4058-8f7d-13e543d8b404\") " pod="kube-system/coredns-674b8bbfcf-2dszx" Jan 20 07:02:46.796529 kubelet[2867]: I0120 07:02:46.795588 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5srsz\" (UniqueName: \"kubernetes.io/projected/e65373ef-4973-4aa5-9425-ede746ebd364-kube-api-access-5srsz\") pod \"coredns-674b8bbfcf-8vm6n\" (UID: \"e65373ef-4973-4aa5-9425-ede746ebd364\") " pod="kube-system/coredns-674b8bbfcf-8vm6n" Jan 20 07:02:46.796529 kubelet[2867]: I0120 07:02:46.795614 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2244b5fd-5821-4058-8f7d-13e543d8b404-config-volume\") pod \"coredns-674b8bbfcf-2dszx\" (UID: \"2244b5fd-5821-4058-8f7d-13e543d8b404\") " pod="kube-system/coredns-674b8bbfcf-2dszx" Jan 20 07:02:46.796529 kubelet[2867]: I0120 07:02:46.795635 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52lh2\" (UniqueName: \"kubernetes.io/projected/18f52096-e17f-46d5-a51d-1ae5ca49fd14-kube-api-access-52lh2\") pod \"calico-apiserver-6c495f47-n5kdv\" (UID: \"18f52096-e17f-46d5-a51d-1ae5ca49fd14\") " pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" Jan 20 07:02:46.796529 kubelet[2867]: I0120 07:02:46.795660 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kn6t\" (UniqueName: 
\"kubernetes.io/projected/48809565-7ef3-4c36-a2a9-e27dfb3fe63c-kube-api-access-2kn6t\") pod \"calico-kube-controllers-5c4c84c57-fbspp\" (UID: \"48809565-7ef3-4c36-a2a9-e27dfb3fe63c\") " pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" Jan 20 07:02:46.796529 kubelet[2867]: I0120 07:02:46.795677 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e65373ef-4973-4aa5-9425-ede746ebd364-config-volume\") pod \"coredns-674b8bbfcf-8vm6n\" (UID: \"e65373ef-4973-4aa5-9425-ede746ebd364\") " pod="kube-system/coredns-674b8bbfcf-8vm6n" Jan 20 07:02:46.796890 kubelet[2867]: I0120 07:02:46.795715 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxz2\" (UniqueName: \"kubernetes.io/projected/add8a880-515a-44f3-9fed-8077d26ba5b6-kube-api-access-zhxz2\") pod \"calico-apiserver-6c495f47-pkkt5\" (UID: \"add8a880-515a-44f3-9fed-8077d26ba5b6\") " pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" Jan 20 07:02:46.797797 kubelet[2867]: I0120 07:02:46.797736 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdf2g\" (UniqueName: \"kubernetes.io/projected/445ffd04-d4f6-441f-8b52-ca096310b51a-kube-api-access-bdf2g\") pod \"whisker-8c9fb997c-5zc2h\" (UID: \"445ffd04-d4f6-441f-8b52-ca096310b51a\") " pod="calico-system/whisker-8c9fb997c-5zc2h" Jan 20 07:02:46.798765 kubelet[2867]: I0120 07:02:46.798225 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48809565-7ef3-4c36-a2a9-e27dfb3fe63c-tigera-ca-bundle\") pod \"calico-kube-controllers-5c4c84c57-fbspp\" (UID: \"48809565-7ef3-4c36-a2a9-e27dfb3fe63c\") " pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" Jan 20 07:02:46.799809 kubelet[2867]: E0120 07:02:46.799307 2867 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-232-7-121\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '172-232-7-121' and this object" logger="UnhandledError" reflector="object-\"calico-apiserver\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Jan 20 07:02:46.809041 systemd[1]: Created slice kubepods-besteffort-pod48809565_7ef3_4c36_a2a9_e27dfb3fe63c.slice - libcontainer container kubepods-besteffort-pod48809565_7ef3_4c36_a2a9_e27dfb3fe63c.slice. Jan 20 07:02:46.823464 systemd[1]: Created slice kubepods-besteffort-podadd8a880_515a_44f3_9fed_8077d26ba5b6.slice - libcontainer container kubepods-besteffort-podadd8a880_515a_44f3_9fed_8077d26ba5b6.slice. Jan 20 07:02:46.838444 systemd[1]: Created slice kubepods-besteffort-pod18f52096_e17f_46d5_a51d_1ae5ca49fd14.slice - libcontainer container kubepods-besteffort-pod18f52096_e17f_46d5_a51d_1ae5ca49fd14.slice. Jan 20 07:02:46.855213 systemd[1]: Created slice kubepods-besteffort-pod930ba9b4_4a35_4f62_858d_858957a6d7e8.slice - libcontainer container kubepods-besteffort-pod930ba9b4_4a35_4f62_858d_858957a6d7e8.slice. Jan 20 07:02:46.863212 systemd[1]: Created slice kubepods-besteffort-pod445ffd04_d4f6_441f_8b52_ca096310b51a.slice - libcontainer container kubepods-besteffort-pod445ffd04_d4f6_441f_8b52_ca096310b51a.slice. 
Jan 20 07:02:46.898995 kubelet[2867]: I0120 07:02:46.898923 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/930ba9b4-4a35-4f62-858d-858957a6d7e8-goldmane-key-pair\") pod \"goldmane-666569f655-8krch\" (UID: \"930ba9b4-4a35-4f62-858d-858957a6d7e8\") " pod="calico-system/goldmane-666569f655-8krch" Jan 20 07:02:46.898995 kubelet[2867]: I0120 07:02:46.898966 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bljpx\" (UniqueName: \"kubernetes.io/projected/930ba9b4-4a35-4f62-858d-858957a6d7e8-kube-api-access-bljpx\") pod \"goldmane-666569f655-8krch\" (UID: \"930ba9b4-4a35-4f62-858d-858957a6d7e8\") " pod="calico-system/goldmane-666569f655-8krch" Jan 20 07:02:46.899345 kubelet[2867]: I0120 07:02:46.899043 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930ba9b4-4a35-4f62-858d-858957a6d7e8-config\") pod \"goldmane-666569f655-8krch\" (UID: \"930ba9b4-4a35-4f62-858d-858957a6d7e8\") " pod="calico-system/goldmane-666569f655-8krch" Jan 20 07:02:46.899345 kubelet[2867]: I0120 07:02:46.899092 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/930ba9b4-4a35-4f62-858d-858957a6d7e8-goldmane-ca-bundle\") pod \"goldmane-666569f655-8krch\" (UID: \"930ba9b4-4a35-4f62-858d-858957a6d7e8\") " pod="calico-system/goldmane-666569f655-8krch" Jan 20 07:02:47.085800 kubelet[2867]: E0120 07:02:47.085743 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:47.087253 containerd[1604]: time="2026-01-20T07:02:47.087218899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2dszx,Uid:2244b5fd-5821-4058-8f7d-13e543d8b404,Namespace:kube-system,Attempt:0,}" Jan 20 07:02:47.103855 kubelet[2867]: E0120 07:02:47.103816 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:47.106595 containerd[1604]: time="2026-01-20T07:02:47.105301316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8vm6n,Uid:e65373ef-4973-4aa5-9425-ede746ebd364,Namespace:kube-system,Attempt:0,}" Jan 20 07:02:47.135026 containerd[1604]: time="2026-01-20T07:02:47.134882829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4c84c57-fbspp,Uid:48809565-7ef3-4c36-a2a9-e27dfb3fe63c,Namespace:calico-system,Attempt:0,}" Jan 20 07:02:47.162468 containerd[1604]: time="2026-01-20T07:02:47.162423326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8krch,Uid:930ba9b4-4a35-4f62-858d-858957a6d7e8,Namespace:calico-system,Attempt:0,}" Jan 20 07:02:47.169024 containerd[1604]: time="2026-01-20T07:02:47.168991691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c9fb997c-5zc2h,Uid:445ffd04-d4f6-441f-8b52-ca096310b51a,Namespace:calico-system,Attempt:0,}" Jan 20 07:02:47.270924 systemd[1]: Created slice kubepods-besteffort-poddd9207e8_fe1e_43a2_ab22_bb4ac860e560.slice - libcontainer container kubepods-besteffort-poddd9207e8_fe1e_43a2_ab22_bb4ac860e560.slice. 
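
The recurring dns.go:153 "Nameserver limits exceeded" warnings mean the node's resolv.conf lists more nameservers than the kubelet will propagate into pod resolv.conf files; only the first three (172.232.0.16 172.232.0.21 172.232.0.13) are applied and the rest are dropped. A rough equivalent of that check, assuming the usual cap of three nameservers (the cap is kubelet's commonly documented limit, not something stated in this log):

    # Flag a resolv.conf that carries more nameservers than pods will receive.
    MAX_NAMESERVERS = 3  # assumed kubelet cap

    def applied_nameservers(path: str = "/etc/resolv.conf") -> list[str]:
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        if len(servers) > MAX_NAMESERVERS:
            print("Nameserver limits were exceeded, some nameservers have been "
                  "omitted, the applied nameserver line is:",
                  " ".join(servers[:MAX_NAMESERVERS]))
        return servers[:MAX_NAMESERVERS]

    print(applied_nameservers())
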
Jan 20 07:02:47.276543 containerd[1604]: time="2026-01-20T07:02:47.276509186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w6jwn,Uid:dd9207e8-fe1e-43a2-ab22-bb4ac860e560,Namespace:calico-system,Attempt:0,}" Jan 20 07:02:47.374536 containerd[1604]: time="2026-01-20T07:02:47.374344821Z" level=error msg="Failed to destroy network for sandbox \"0fc46d5e9a046b42138973451714be3576ad6313670292fa3b5515545dce9cee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.377947 containerd[1604]: time="2026-01-20T07:02:47.377857449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8vm6n,Uid:e65373ef-4973-4aa5-9425-ede746ebd364,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fc46d5e9a046b42138973451714be3576ad6313670292fa3b5515545dce9cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.378893 kubelet[2867]: E0120 07:02:47.378225 2867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fc46d5e9a046b42138973451714be3576ad6313670292fa3b5515545dce9cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.378893 kubelet[2867]: E0120 07:02:47.378358 2867 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fc46d5e9a046b42138973451714be3576ad6313670292fa3b5515545dce9cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8vm6n" Jan 20 07:02:47.378893 kubelet[2867]: E0120 07:02:47.378414 2867 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fc46d5e9a046b42138973451714be3576ad6313670292fa3b5515545dce9cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8vm6n" Jan 20 07:02:47.381955 kubelet[2867]: E0120 07:02:47.381818 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8vm6n_kube-system(e65373ef-4973-4aa5-9425-ede746ebd364)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8vm6n_kube-system(e65373ef-4973-4aa5-9425-ede746ebd364)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fc46d5e9a046b42138973451714be3576ad6313670292fa3b5515545dce9cee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8vm6n" podUID="e65373ef-4973-4aa5-9425-ede746ebd364" Jan 20 07:02:47.390244 containerd[1604]: time="2026-01-20T07:02:47.390056714Z" level=error msg="Failed to destroy network for 
sandbox \"39b49c4b7f7d0bdad3b3c0485c35d8a202882fe7129ac7071e23e4ba0344148c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.401548 containerd[1604]: time="2026-01-20T07:02:47.401358258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2dszx,Uid:2244b5fd-5821-4058-8f7d-13e543d8b404,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39b49c4b7f7d0bdad3b3c0485c35d8a202882fe7129ac7071e23e4ba0344148c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.401954 kubelet[2867]: E0120 07:02:47.401906 2867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39b49c4b7f7d0bdad3b3c0485c35d8a202882fe7129ac7071e23e4ba0344148c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.402199 kubelet[2867]: E0120 07:02:47.402137 2867 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39b49c4b7f7d0bdad3b3c0485c35d8a202882fe7129ac7071e23e4ba0344148c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2dszx" Jan 20 07:02:47.402394 kubelet[2867]: E0120 07:02:47.402277 2867 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39b49c4b7f7d0bdad3b3c0485c35d8a202882fe7129ac7071e23e4ba0344148c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2dszx" Jan 20 07:02:47.403914 kubelet[2867]: E0120 07:02:47.402483 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2dszx_kube-system(2244b5fd-5821-4058-8f7d-13e543d8b404)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2dszx_kube-system(2244b5fd-5821-4058-8f7d-13e543d8b404)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39b49c4b7f7d0bdad3b3c0485c35d8a202882fe7129ac7071e23e4ba0344148c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2dszx" podUID="2244b5fd-5821-4058-8f7d-13e543d8b404" Jan 20 07:02:47.439304 containerd[1604]: time="2026-01-20T07:02:47.439212108Z" level=error msg="Failed to destroy network for sandbox \"dc82a816553fc635e1a5d5c555db9bab8122e2da4e0c264bf8253caccdbbfbef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.454969 containerd[1604]: time="2026-01-20T07:02:47.454745010Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4c84c57-fbspp,Uid:48809565-7ef3-4c36-a2a9-e27dfb3fe63c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc82a816553fc635e1a5d5c555db9bab8122e2da4e0c264bf8253caccdbbfbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.456076 kubelet[2867]: E0120 07:02:47.455843 2867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc82a816553fc635e1a5d5c555db9bab8122e2da4e0c264bf8253caccdbbfbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.456503 kubelet[2867]: E0120 07:02:47.456341 2867 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc82a816553fc635e1a5d5c555db9bab8122e2da4e0c264bf8253caccdbbfbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" Jan 20 07:02:47.456710 kubelet[2867]: E0120 07:02:47.456660 2867 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc82a816553fc635e1a5d5c555db9bab8122e2da4e0c264bf8253caccdbbfbef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" Jan 20 07:02:47.457249 kubelet[2867]: E0120 07:02:47.457121 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c4c84c57-fbspp_calico-system(48809565-7ef3-4c36-a2a9-e27dfb3fe63c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c4c84c57-fbspp_calico-system(48809565-7ef3-4c36-a2a9-e27dfb3fe63c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc82a816553fc635e1a5d5c555db9bab8122e2da4e0c264bf8253caccdbbfbef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:02:47.468265 containerd[1604]: time="2026-01-20T07:02:47.468162408Z" level=error msg="Failed to destroy network for sandbox \"31c3609f4ab3c7cb1b1185fffb55a283ea26c0b9bab3d50080659056b16dcefd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.471627 containerd[1604]: time="2026-01-20T07:02:47.471463855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8krch,Uid:930ba9b4-4a35-4f62-858d-858957a6d7e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"31c3609f4ab3c7cb1b1185fffb55a283ea26c0b9bab3d50080659056b16dcefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.473778 kubelet[2867]: E0120 07:02:47.473584 2867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c3609f4ab3c7cb1b1185fffb55a283ea26c0b9bab3d50080659056b16dcefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.475033 kubelet[2867]: E0120 07:02:47.474862 2867 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c3609f4ab3c7cb1b1185fffb55a283ea26c0b9bab3d50080659056b16dcefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8krch" Jan 20 07:02:47.475429 kubelet[2867]: E0120 07:02:47.474965 2867 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c3609f4ab3c7cb1b1185fffb55a283ea26c0b9bab3d50080659056b16dcefd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8krch" Jan 20 07:02:47.475429 kubelet[2867]: E0120 07:02:47.475374 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8krch_calico-system(930ba9b4-4a35-4f62-858d-858957a6d7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8krch_calico-system(930ba9b4-4a35-4f62-858d-858957a6d7e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31c3609f4ab3c7cb1b1185fffb55a283ea26c0b9bab3d50080659056b16dcefd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:02:47.489888 containerd[1604]: time="2026-01-20T07:02:47.489798454Z" level=error msg="Failed to destroy network for sandbox \"cdfddb63a148279c1121b21ca6702147eab4a89ec31d49c2ab91409741045bbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.491870 containerd[1604]: time="2026-01-20T07:02:47.491690048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c9fb997c-5zc2h,Uid:445ffd04-d4f6-441f-8b52-ca096310b51a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdfddb63a148279c1121b21ca6702147eab4a89ec31d49c2ab91409741045bbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.492107 kubelet[2867]: E0120 07:02:47.492054 2867 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdfddb63a148279c1121b21ca6702147eab4a89ec31d49c2ab91409741045bbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.493389 kubelet[2867]: E0120 07:02:47.492149 2867 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdfddb63a148279c1121b21ca6702147eab4a89ec31d49c2ab91409741045bbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8c9fb997c-5zc2h" Jan 20 07:02:47.493389 kubelet[2867]: E0120 07:02:47.492228 2867 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdfddb63a148279c1121b21ca6702147eab4a89ec31d49c2ab91409741045bbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8c9fb997c-5zc2h" Jan 20 07:02:47.493389 kubelet[2867]: E0120 07:02:47.492389 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8c9fb997c-5zc2h_calico-system(445ffd04-d4f6-441f-8b52-ca096310b51a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8c9fb997c-5zc2h_calico-system(445ffd04-d4f6-441f-8b52-ca096310b51a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdfddb63a148279c1121b21ca6702147eab4a89ec31d49c2ab91409741045bbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8c9fb997c-5zc2h" podUID="445ffd04-d4f6-441f-8b52-ca096310b51a" Jan 20 07:02:47.517223 containerd[1604]: time="2026-01-20T07:02:47.517112021Z" level=error msg="Failed to destroy network for sandbox \"065267192ed5f9a29445ca1304fc9ec8cbbd8e80ddb38763bb8ed365e1dc9722\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.519234 containerd[1604]: time="2026-01-20T07:02:47.519166106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w6jwn,Uid:dd9207e8-fe1e-43a2-ab22-bb4ac860e560,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"065267192ed5f9a29445ca1304fc9ec8cbbd8e80ddb38763bb8ed365e1dc9722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.519619 kubelet[2867]: E0120 07:02:47.519563 2867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"065267192ed5f9a29445ca1304fc9ec8cbbd8e80ddb38763bb8ed365e1dc9722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 
07:02:47.519700 kubelet[2867]: E0120 07:02:47.519662 2867 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"065267192ed5f9a29445ca1304fc9ec8cbbd8e80ddb38763bb8ed365e1dc9722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w6jwn" Jan 20 07:02:47.519700 kubelet[2867]: E0120 07:02:47.519689 2867 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"065267192ed5f9a29445ca1304fc9ec8cbbd8e80ddb38763bb8ed365e1dc9722\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w6jwn" Jan 20 07:02:47.519808 kubelet[2867]: E0120 07:02:47.519778 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w6jwn_calico-system(dd9207e8-fe1e-43a2-ab22-bb4ac860e560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w6jwn_calico-system(dd9207e8-fe1e-43a2-ab22-bb4ac860e560)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"065267192ed5f9a29445ca1304fc9ec8cbbd8e80ddb38763bb8ed365e1dc9722\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:02:47.593502 kubelet[2867]: E0120 07:02:47.593437 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:47.598144 containerd[1604]: time="2026-01-20T07:02:47.597209359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 07:02:47.734022 containerd[1604]: time="2026-01-20T07:02:47.733808505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c495f47-pkkt5,Uid:add8a880-515a-44f3-9fed-8077d26ba5b6,Namespace:calico-apiserver,Attempt:0,}" Jan 20 07:02:47.756229 containerd[1604]: time="2026-01-20T07:02:47.755966722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c495f47-n5kdv,Uid:18f52096-e17f-46d5-a51d-1ae5ca49fd14,Namespace:calico-apiserver,Attempt:0,}" Jan 20 07:02:47.859478 containerd[1604]: time="2026-01-20T07:02:47.859344069Z" level=error msg="Failed to destroy network for sandbox \"8620dc3b02910f892f2d20522dfdf7d51a95497a584126b32050b9ba357270ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.863893 systemd[1]: run-netns-cni\x2d822b2e6f\x2d379d\x2df524\x2d1286\x2d7fab4f624ace.mount: Deactivated successfully. 
Jan 20 07:02:47.873004 containerd[1604]: time="2026-01-20T07:02:47.872873477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c495f47-pkkt5,Uid:add8a880-515a-44f3-9fed-8077d26ba5b6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8620dc3b02910f892f2d20522dfdf7d51a95497a584126b32050b9ba357270ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.874054 kubelet[2867]: E0120 07:02:47.873909 2867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8620dc3b02910f892f2d20522dfdf7d51a95497a584126b32050b9ba357270ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.874054 kubelet[2867]: E0120 07:02:47.874016 2867 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8620dc3b02910f892f2d20522dfdf7d51a95497a584126b32050b9ba357270ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" Jan 20 07:02:47.874923 kubelet[2867]: E0120 07:02:47.874049 2867 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8620dc3b02910f892f2d20522dfdf7d51a95497a584126b32050b9ba357270ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" Jan 20 07:02:47.875301 kubelet[2867]: E0120 07:02:47.875233 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c495f47-pkkt5_calico-apiserver(add8a880-515a-44f3-9fed-8077d26ba5b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c495f47-pkkt5_calico-apiserver(add8a880-515a-44f3-9fed-8077d26ba5b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8620dc3b02910f892f2d20522dfdf7d51a95497a584126b32050b9ba357270ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:02:47.883214 containerd[1604]: time="2026-01-20T07:02:47.880996184Z" level=error msg="Failed to destroy network for sandbox \"625b2f0f8cd07d9950d083cbc6a39d1acfca2910278468115ed2d1d9696ce24d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.886128 systemd[1]: run-netns-cni\x2d8c6a9e65\x2dfd55\x2da59c\x2deedf\x2da1d1839d1f7f.mount: Deactivated successfully. 
Jan 20 07:02:47.890223 containerd[1604]: time="2026-01-20T07:02:47.889073991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c495f47-n5kdv,Uid:18f52096-e17f-46d5-a51d-1ae5ca49fd14,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"625b2f0f8cd07d9950d083cbc6a39d1acfca2910278468115ed2d1d9696ce24d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.890708 kubelet[2867]: E0120 07:02:47.889744 2867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625b2f0f8cd07d9950d083cbc6a39d1acfca2910278468115ed2d1d9696ce24d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 07:02:47.890708 kubelet[2867]: E0120 07:02:47.889837 2867 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625b2f0f8cd07d9950d083cbc6a39d1acfca2910278468115ed2d1d9696ce24d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" Jan 20 07:02:47.890708 kubelet[2867]: E0120 07:02:47.889876 2867 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625b2f0f8cd07d9950d083cbc6a39d1acfca2910278468115ed2d1d9696ce24d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" Jan 20 07:02:47.890884 kubelet[2867]: E0120 07:02:47.889931 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c495f47-n5kdv_calico-apiserver(18f52096-e17f-46d5-a51d-1ae5ca49fd14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c495f47-n5kdv_calico-apiserver(18f52096-e17f-46d5-a51d-1ae5ca49fd14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"625b2f0f8cd07d9950d083cbc6a39d1acfca2910278468115ed2d1d9696ce24d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:02:55.511136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount623161596.mount: Deactivated successfully. 
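Every CreatePodSandbox failure above (goldmane, whisker, csi-node-driver, and both calico-apiserver pods) reports the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, because calico-node has not started yet and so has not written that file. A minimal Python sketch of that gate, purely illustrative and not Calico's actual code, shows why every add and delete fails identically until the file appears:

import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # path quoted verbatim in the errors above

def require_nodename() -> str:
    # The CNI plugin refuses to do any network setup until calico-node has
    # written this file; every failed add/delete above is this gate firing.
    try:
        with open(NODENAME_FILE) as f:
            return f.read().strip()
    except FileNotFoundError:
        sys.exit(f"stat {NODENAME_FILE}: no such file or directory: "
                 "check that the calico/node container is running and has "
                 "mounted /var/lib/calico/")

if __name__ == "__main__":
    print(require_nodename())

Once calico-node starts (the StartContainer entries at 07:02:55 below), later RunPodSandbox attempts succeed and pods begin receiving addresses.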
Jan 20 07:02:55.552284 containerd[1604]: time="2026-01-20T07:02:55.552042872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:55.553696 containerd[1604]: time="2026-01-20T07:02:55.553668484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 20 07:02:55.555221 containerd[1604]: time="2026-01-20T07:02:55.554422205Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:55.557343 containerd[1604]: time="2026-01-20T07:02:55.557311250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 07:02:55.557777 containerd[1604]: time="2026-01-20T07:02:55.557745920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.960464911s" Jan 20 07:02:55.557860 containerd[1604]: time="2026-01-20T07:02:55.557804040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 07:02:55.601878 containerd[1604]: time="2026-01-20T07:02:55.601521574Z" level=info msg="CreateContainer within sandbox \"6c175be8e3499886c72c3d7549f5bd28a88f0161d01746298e264d8e61f84222\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 07:02:55.616028 containerd[1604]: time="2026-01-20T07:02:55.615987564Z" level=info msg="Container 664319a19b25a8bf909b428f6867a34fa7cbcec249b7b8a009612cf2a2916158: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:02:55.628748 containerd[1604]: time="2026-01-20T07:02:55.628694304Z" level=info msg="CreateContainer within sandbox \"6c175be8e3499886c72c3d7549f5bd28a88f0161d01746298e264d8e61f84222\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"664319a19b25a8bf909b428f6867a34fa7cbcec249b7b8a009612cf2a2916158\"" Jan 20 07:02:55.629764 containerd[1604]: time="2026-01-20T07:02:55.629735805Z" level=info msg="StartContainer for \"664319a19b25a8bf909b428f6867a34fa7cbcec249b7b8a009612cf2a2916158\"" Jan 20 07:02:55.631882 containerd[1604]: time="2026-01-20T07:02:55.631842068Z" level=info msg="connecting to shim 664319a19b25a8bf909b428f6867a34fa7cbcec249b7b8a009612cf2a2916158" address="unix:///run/containerd/s/9914c6440b77288d351688d1a2633f10b936fe692d208d104a87395efb9a8062" protocol=ttrpc version=3 Jan 20 07:02:55.712539 systemd[1]: Started cri-containerd-664319a19b25a8bf909b428f6867a34fa7cbcec249b7b8a009612cf2a2916158.scope - libcontainer container 664319a19b25a8bf909b428f6867a34fa7cbcec249b7b8a009612cf2a2916158. 
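For scale: the pull of ghcr.io/flatcar/calico/node:v3.30.4 reported just above moved roughly 157 MB in about 8 seconds. A quick back-of-the-envelope check using only the two figures containerd logged:

# Figures copied from the "Pulled image" entry above.
size_bytes = 156_883_537      # repo-reported size logged by containerd
duration_s = 7.960464911      # "in 7.960464911s"

print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")     # ~19.7 MB/s
print(f"{size_bytes / duration_s / 2**20:.1f} MiB/s")  # ~18.8 MiB/s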
Jan 20 07:02:55.797530 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 20 07:02:55.797967 kernel: audit: type=1334 audit(1768892575.791:577): prog-id=178 op=LOAD Jan 20 07:02:55.791000 audit: BPF prog-id=178 op=LOAD Jan 20 07:02:55.791000 audit[3814]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3375 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:55.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343331396131396232356138626639303962343238663638363761 Jan 20 07:02:55.839893 kernel: audit: type=1300 audit(1768892575.791:577): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3375 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:55.840158 kernel: audit: type=1327 audit(1768892575.791:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343331396131396232356138626639303962343238663638363761 Jan 20 07:02:55.840237 kernel: audit: type=1334 audit(1768892575.795:578): prog-id=179 op=LOAD Jan 20 07:02:55.840285 kernel: audit: type=1300 audit(1768892575.795:578): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3375 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:55.840320 kernel: audit: type=1327 audit(1768892575.795:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343331396131396232356138626639303962343238663638363761 Jan 20 07:02:55.840375 kernel: audit: type=1334 audit(1768892575.795:579): prog-id=179 op=UNLOAD Jan 20 07:02:55.840419 kernel: audit: type=1300 audit(1768892575.795:579): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:55.795000 audit: BPF prog-id=179 op=LOAD Jan 20 07:02:55.795000 audit[3814]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3375 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:55.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343331396131396232356138626639303962343238663638363761 Jan 20 07:02:55.795000 audit: BPF prog-id=179 op=UNLOAD Jan 20 07:02:55.795000 audit[3814]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:55.846365 kernel: audit: type=1327 audit(1768892575.795:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343331396131396232356138626639303962343238663638363761 Jan 20 07:02:55.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343331396131396232356138626639303962343238663638363761 Jan 20 07:02:55.855491 kernel: audit: type=1334 audit(1768892575.795:580): prog-id=178 op=UNLOAD Jan 20 07:02:55.795000 audit: BPF prog-id=178 op=UNLOAD Jan 20 07:02:55.795000 audit[3814]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3375 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:55.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343331396131396232356138626639303962343238663638363761 Jan 20 07:02:55.795000 audit: BPF prog-id=180 op=LOAD Jan 20 07:02:55.795000 audit[3814]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3375 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:55.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343331396131396232356138626639303962343238663638363761 Jan 20 07:02:55.886275 containerd[1604]: time="2026-01-20T07:02:55.886232047Z" level=info msg="StartContainer for \"664319a19b25a8bf909b428f6867a34fa7cbcec249b7b8a009612cf2a2916158\" returns successfully" Jan 20 07:02:56.122369 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 07:02:56.122510 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
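The audit PROCTITLE fields above are not corruption: auditd hex-encodes the process argv with NUL separators. Decoding the leading bytes of the runc entry (standard auditd encoding, nothing container-specific) recovers the command line:

# Leading part of one PROCTITLE value copied from the audit records above.
proctitle_hex = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
    "2F6B38732E696F002D2D6C6F67"
)

argv = [a.decode() for a in bytes.fromhex(proctitle_hex).split(b"\x00")]
print(argv)  # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']

The remainder of the value is the --log path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/ followed by the (truncated) container ID; the iptables-restore PROCTITLE entries further down decode the same way to "iptables-restore -w 5 -W 100000 --noflush --counters".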
Jan 20 07:02:56.607991 kubelet[2867]: I0120 07:02:56.607566 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdf2g\" (UniqueName: \"kubernetes.io/projected/445ffd04-d4f6-441f-8b52-ca096310b51a-kube-api-access-bdf2g\") pod \"445ffd04-d4f6-441f-8b52-ca096310b51a\" (UID: \"445ffd04-d4f6-441f-8b52-ca096310b51a\") " Jan 20 07:02:56.611787 kubelet[2867]: I0120 07:02:56.608399 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/445ffd04-d4f6-441f-8b52-ca096310b51a-whisker-backend-key-pair\") pod \"445ffd04-d4f6-441f-8b52-ca096310b51a\" (UID: \"445ffd04-d4f6-441f-8b52-ca096310b51a\") " Jan 20 07:02:56.611787 kubelet[2867]: I0120 07:02:56.608430 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445ffd04-d4f6-441f-8b52-ca096310b51a-whisker-ca-bundle\") pod \"445ffd04-d4f6-441f-8b52-ca096310b51a\" (UID: \"445ffd04-d4f6-441f-8b52-ca096310b51a\") " Jan 20 07:02:56.611787 kubelet[2867]: I0120 07:02:56.609342 2867 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/445ffd04-d4f6-441f-8b52-ca096310b51a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "445ffd04-d4f6-441f-8b52-ca096310b51a" (UID: "445ffd04-d4f6-441f-8b52-ca096310b51a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 07:02:56.623479 systemd[1]: var-lib-kubelet-pods-445ffd04\x2dd4f6\x2d441f\x2d8b52\x2dca096310b51a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbdf2g.mount: Deactivated successfully. Jan 20 07:02:56.629725 kubelet[2867]: I0120 07:02:56.627151 2867 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445ffd04-d4f6-441f-8b52-ca096310b51a-kube-api-access-bdf2g" (OuterVolumeSpecName: "kube-api-access-bdf2g") pod "445ffd04-d4f6-441f-8b52-ca096310b51a" (UID: "445ffd04-d4f6-441f-8b52-ca096310b51a"). InnerVolumeSpecName "kube-api-access-bdf2g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 07:02:56.629892 systemd[1]: var-lib-kubelet-pods-445ffd04\x2dd4f6\x2d441f\x2d8b52\x2dca096310b51a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 07:02:56.631260 kubelet[2867]: I0120 07:02:56.630997 2867 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445ffd04-d4f6-441f-8b52-ca096310b51a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "445ffd04-d4f6-441f-8b52-ca096310b51a" (UID: "445ffd04-d4f6-441f-8b52-ca096310b51a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 07:02:56.669117 kubelet[2867]: E0120 07:02:56.669062 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:56.698068 systemd[1]: Removed slice kubepods-besteffort-pod445ffd04_d4f6_441f_8b52_ca096310b51a.slice - libcontainer container kubepods-besteffort-pod445ffd04_d4f6_441f_8b52_ca096310b51a.slice. 
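The recurring "Nameserver limits exceeded" errors from kubelet's dns.go are a capacity warning rather than a failure: the node's resolv.conf carries more resolvers than a pod's DNS config may hold, so kubelet drops the surplus and applies only the leading entries (the three 172.232.0.x addresses shown in the message). A rough illustration of that behavior, not kubelet's actual implementation:

# Illustration only: the applied nameserver line in the log keeps exactly
# three resolvers, so the cap is modeled as 3 here.
MAX_NAMESERVERS = 3

def apply_limit(nameservers: list[str]) -> list[str]:
    if len(nameservers) > MAX_NAMESERVERS:
        print("Nameserver limits were exceeded, some nameservers have been omitted")
    return nameservers[:MAX_NAMESERVERS]

# The three resolvers that survive in the log, plus a hypothetical extra
# (192.0.2.1, TEST-NET) standing in for whatever was omitted:
print(apply_limit(["172.232.0.16", "172.232.0.21", "172.232.0.13", "192.0.2.1"]))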
Jan 20 07:02:56.714234 kubelet[2867]: I0120 07:02:56.712461 2867 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bdf2g\" (UniqueName: \"kubernetes.io/projected/445ffd04-d4f6-441f-8b52-ca096310b51a-kube-api-access-bdf2g\") on node \"172-232-7-121\" DevicePath \"\"" Jan 20 07:02:56.714370 kubelet[2867]: I0120 07:02:56.714254 2867 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/445ffd04-d4f6-441f-8b52-ca096310b51a-whisker-backend-key-pair\") on node \"172-232-7-121\" DevicePath \"\"" Jan 20 07:02:56.714370 kubelet[2867]: I0120 07:02:56.714270 2867 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/445ffd04-d4f6-441f-8b52-ca096310b51a-whisker-ca-bundle\") on node \"172-232-7-121\" DevicePath \"\"" Jan 20 07:02:56.720277 kubelet[2867]: I0120 07:02:56.720091 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n2nks" podStartSLOduration=2.8623694200000003 podStartE2EDuration="24.719646336s" podCreationTimestamp="2026-01-20 07:02:32 +0000 UTC" firstStartedPulling="2026-01-20 07:02:33.702106917 +0000 UTC m=+26.732780462" lastFinishedPulling="2026-01-20 07:02:55.559383843 +0000 UTC m=+48.590057378" observedRunningTime="2026-01-20 07:02:56.713010446 +0000 UTC m=+49.743683991" watchObservedRunningTime="2026-01-20 07:02:56.719646336 +0000 UTC m=+49.750319871" Jan 20 07:02:56.814981 kubelet[2867]: I0120 07:02:56.814876 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32ea1474-9105-468a-bde8-3bef92925725-whisker-ca-bundle\") pod \"whisker-547dfbb977-shb9n\" (UID: \"32ea1474-9105-468a-bde8-3bef92925725\") " pod="calico-system/whisker-547dfbb977-shb9n" Jan 20 07:02:56.814981 kubelet[2867]: I0120 07:02:56.814975 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/32ea1474-9105-468a-bde8-3bef92925725-whisker-backend-key-pair\") pod \"whisker-547dfbb977-shb9n\" (UID: \"32ea1474-9105-468a-bde8-3bef92925725\") " pod="calico-system/whisker-547dfbb977-shb9n" Jan 20 07:02:56.814981 kubelet[2867]: I0120 07:02:56.814993 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbc4\" (UniqueName: \"kubernetes.io/projected/32ea1474-9105-468a-bde8-3bef92925725-kube-api-access-5rbc4\") pod \"whisker-547dfbb977-shb9n\" (UID: \"32ea1474-9105-468a-bde8-3bef92925725\") " pod="calico-system/whisker-547dfbb977-shb9n" Jan 20 07:02:56.820559 systemd[1]: Created slice kubepods-besteffort-pod32ea1474_9105_468a_bde8_3bef92925725.slice - libcontainer container kubepods-besteffort-pod32ea1474_9105_468a_bde8_3bef92925725.slice. 
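The pod_startup_latency_tracker entry above for calico-node-n2nks checks out arithmetically: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration is that figure minus the image-pull window, which lines up exactly if the window is taken from the monotonic readings (the "m=+..." suffixes) rather than the wall-clock times, as Go's time.Sub does when both timestamps carry them:

# Readings copied from the tracker entry above (seconds).
creation        = 32.000000000   # podCreationTimestamp 07:02:32 (wall clock)
watch_observed  = 56.719646336   # watchObservedRunningTime 07:02:56.719646336
first_pull_mono = 26.732780462   # firstStartedPulling  m=+26.732780462
last_pull_mono  = 48.590057378   # lastFinishedPulling  m=+48.590057378

e2e  = watch_observed - creation          # 24.719646336 -> podStartE2EDuration
pull = last_pull_mono - first_pull_mono   # 21.857276916 s spent pulling calico/node
slo  = e2e - pull                         # 2.862369420  -> podStartSLOduration
print(f"e2e={e2e:.9f}s pull={pull:.9f}s slo={slo:.9f}s")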
Jan 20 07:02:57.128932 containerd[1604]: time="2026-01-20T07:02:57.128827139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-547dfbb977-shb9n,Uid:32ea1474-9105-468a-bde8-3bef92925725,Namespace:calico-system,Attempt:0,}" Jan 20 07:02:57.253463 kubelet[2867]: I0120 07:02:57.253358 2867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="445ffd04-d4f6-441f-8b52-ca096310b51a" path="/var/lib/kubelet/pods/445ffd04-d4f6-441f-8b52-ca096310b51a/volumes" Jan 20 07:02:57.347047 systemd-networkd[1498]: cali6dce84d3f72: Link UP Jan 20 07:02:57.349706 systemd-networkd[1498]: cali6dce84d3f72: Gained carrier Jan 20 07:02:57.372325 containerd[1604]: 2026-01-20 07:02:57.175 [INFO][3905] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 07:02:57.372325 containerd[1604]: 2026-01-20 07:02:57.229 [INFO][3905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0 whisker-547dfbb977- calico-system 32ea1474-9105-468a-bde8-3bef92925725 981 0 2026-01-20 07:02:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:547dfbb977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-232-7-121 whisker-547dfbb977-shb9n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6dce84d3f72 [] [] }} ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Namespace="calico-system" Pod="whisker-547dfbb977-shb9n" WorkloadEndpoint="172--232--7--121-k8s-whisker--547dfbb977--shb9n-" Jan 20 07:02:57.372325 containerd[1604]: 2026-01-20 07:02:57.230 [INFO][3905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Namespace="calico-system" Pod="whisker-547dfbb977-shb9n" WorkloadEndpoint="172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0" Jan 20 07:02:57.372325 containerd[1604]: 2026-01-20 07:02:57.282 [INFO][3917] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" HandleID="k8s-pod-network.c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Workload="172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0" Jan 20 07:02:57.372617 containerd[1604]: 2026-01-20 07:02:57.283 [INFO][3917] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" HandleID="k8s-pod-network.c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Workload="172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-121", "pod":"whisker-547dfbb977-shb9n", "timestamp":"2026-01-20 07:02:57.282495875 +0000 UTC"}, Hostname:"172-232-7-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 07:02:57.372617 containerd[1604]: 2026-01-20 07:02:57.283 [INFO][3917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 07:02:57.372617 containerd[1604]: 2026-01-20 07:02:57.283 [INFO][3917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 07:02:57.372617 containerd[1604]: 2026-01-20 07:02:57.283 [INFO][3917] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-121' Jan 20 07:02:57.372617 containerd[1604]: 2026-01-20 07:02:57.292 [INFO][3917] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" host="172-232-7-121" Jan 20 07:02:57.372617 containerd[1604]: 2026-01-20 07:02:57.298 [INFO][3917] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-121" Jan 20 07:02:57.372617 containerd[1604]: 2026-01-20 07:02:57.303 [INFO][3917] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="172-232-7-121" Jan 20 07:02:57.372617 containerd[1604]: 2026-01-20 07:02:57.305 [INFO][3917] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:02:57.372617 containerd[1604]: 2026-01-20 07:02:57.308 [INFO][3917] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:02:57.372617 containerd[1604]: 2026-01-20 07:02:57.309 [INFO][3917] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" host="172-232-7-121" Jan 20 07:02:57.373110 containerd[1604]: 2026-01-20 07:02:57.312 [INFO][3917] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6 Jan 20 07:02:57.373110 containerd[1604]: 2026-01-20 07:02:57.318 [INFO][3917] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" host="172-232-7-121" Jan 20 07:02:57.373110 containerd[1604]: 2026-01-20 07:02:57.323 [INFO][3917] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.129/26] block=192.168.82.128/26 handle="k8s-pod-network.c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" host="172-232-7-121" Jan 20 07:02:57.373110 containerd[1604]: 2026-01-20 07:02:57.323 [INFO][3917] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.129/26] handle="k8s-pod-network.c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" host="172-232-7-121" Jan 20 07:02:57.373110 containerd[1604]: 2026-01-20 07:02:57.323 [INFO][3917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
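The IPAM walk above (affinity lookup, block load, claim) hands the whisker pod 192.168.82.129 out of the /26 block affine to this host; the same block later yields .130 for coredns. A quick sanity check on those figures with Python's ipaddress module:

import ipaddress

block = ipaddress.ip_network("192.168.82.128/26")     # block from the log
for addr in ("192.168.82.129", "192.168.82.130"):
    print(addr, ipaddress.ip_address(addr) in block)  # True, True

print(block.num_addresses)       # 64 addresses per block
print(block[0], "-", block[-1])  # 192.168.82.128 - 192.168.82.191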
Jan 20 07:02:57.373110 containerd[1604]: 2026-01-20 07:02:57.323 [INFO][3917] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.129/26] IPv6=[] ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" HandleID="k8s-pod-network.c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Workload="172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0" Jan 20 07:02:57.373392 containerd[1604]: 2026-01-20 07:02:57.327 [INFO][3905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Namespace="calico-system" Pod="whisker-547dfbb977-shb9n" WorkloadEndpoint="172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0", GenerateName:"whisker-547dfbb977-", Namespace:"calico-system", SelfLink:"", UID:"32ea1474-9105-468a-bde8-3bef92925725", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"547dfbb977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"", Pod:"whisker-547dfbb977-shb9n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6dce84d3f72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:02:57.373392 containerd[1604]: 2026-01-20 07:02:57.328 [INFO][3905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.129/32] ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Namespace="calico-system" Pod="whisker-547dfbb977-shb9n" WorkloadEndpoint="172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0" Jan 20 07:02:57.373711 containerd[1604]: 2026-01-20 07:02:57.328 [INFO][3905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dce84d3f72 ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Namespace="calico-system" Pod="whisker-547dfbb977-shb9n" WorkloadEndpoint="172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0" Jan 20 07:02:57.373711 containerd[1604]: 2026-01-20 07:02:57.345 [INFO][3905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Namespace="calico-system" Pod="whisker-547dfbb977-shb9n" WorkloadEndpoint="172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0" Jan 20 07:02:57.373769 containerd[1604]: 2026-01-20 07:02:57.349 [INFO][3905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Namespace="calico-system" Pod="whisker-547dfbb977-shb9n" 
WorkloadEndpoint="172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0", GenerateName:"whisker-547dfbb977-", Namespace:"calico-system", SelfLink:"", UID:"32ea1474-9105-468a-bde8-3bef92925725", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"547dfbb977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6", Pod:"whisker-547dfbb977-shb9n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6dce84d3f72", MAC:"ae:52:e5:80:fe:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:02:57.373829 containerd[1604]: 2026-01-20 07:02:57.362 [INFO][3905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" Namespace="calico-system" Pod="whisker-547dfbb977-shb9n" WorkloadEndpoint="172--232--7--121-k8s-whisker--547dfbb977--shb9n-eth0" Jan 20 07:02:57.443512 containerd[1604]: time="2026-01-20T07:02:57.443333330Z" level=info msg="connecting to shim c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6" address="unix:///run/containerd/s/74879a27e0e4ab4661987f25d7b11c15faaa5cbf76fb682e3c5cb8cf6c94d474" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:02:57.498440 systemd[1]: Started cri-containerd-c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6.scope - libcontainer container c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6. 
Jan 20 07:02:57.531000 audit: BPF prog-id=181 op=LOAD Jan 20 07:02:57.532000 audit: BPF prog-id=182 op=LOAD Jan 20 07:02:57.532000 audit[3949]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3938 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:57.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393335626236623364396237343839353634383832383831326533 Jan 20 07:02:57.532000 audit: BPF prog-id=182 op=UNLOAD Jan 20 07:02:57.532000 audit[3949]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3938 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:57.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393335626236623364396237343839353634383832383831326533 Jan 20 07:02:57.533000 audit: BPF prog-id=183 op=LOAD Jan 20 07:02:57.533000 audit[3949]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3938 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:57.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393335626236623364396237343839353634383832383831326533 Jan 20 07:02:57.533000 audit: BPF prog-id=184 op=LOAD Jan 20 07:02:57.533000 audit[3949]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3938 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:57.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393335626236623364396237343839353634383832383831326533 Jan 20 07:02:57.533000 audit: BPF prog-id=184 op=UNLOAD Jan 20 07:02:57.533000 audit[3949]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3938 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:57.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393335626236623364396237343839353634383832383831326533 Jan 20 07:02:57.533000 audit: BPF prog-id=183 op=UNLOAD Jan 20 07:02:57.533000 audit[3949]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3938 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:57.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393335626236623364396237343839353634383832383831326533 Jan 20 07:02:57.533000 audit: BPF prog-id=185 op=LOAD Jan 20 07:02:57.533000 audit[3949]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3938 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:57.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393335626236623364396237343839353634383832383831326533 Jan 20 07:02:57.591295 containerd[1604]: time="2026-01-20T07:02:57.591220088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-547dfbb977-shb9n,Uid:32ea1474-9105-468a-bde8-3bef92925725,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4935bb6b3d9b74895648828812e3b31371fadbdebdd0097d82749f325738fd6\"" Jan 20 07:02:57.596081 containerd[1604]: time="2026-01-20T07:02:57.596046535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 07:02:57.672391 kubelet[2867]: E0120 07:02:57.672324 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:02:57.732483 containerd[1604]: time="2026-01-20T07:02:57.732232337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:02:57.734816 containerd[1604]: time="2026-01-20T07:02:57.734785910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 07:02:57.734816 containerd[1604]: time="2026-01-20T07:02:57.734865210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 07:02:57.735662 kubelet[2867]: E0120 07:02:57.735521 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 07:02:57.735662 kubelet[2867]: E0120 07:02:57.735613 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 07:02:57.742160 kubelet[2867]: E0120 07:02:57.742080 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed0c70bbfbc248a6a5f52e94a287b2e5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rbc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547dfbb977-shb9n_calico-system(32ea1474-9105-468a-bde8-3bef92925725): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 07:02:57.745136 containerd[1604]: time="2026-01-20T07:02:57.745048944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 07:02:57.874097 containerd[1604]: time="2026-01-20T07:02:57.874039446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:02:57.875088 containerd[1604]: time="2026-01-20T07:02:57.875056568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 07:02:57.875172 containerd[1604]: time="2026-01-20T07:02:57.875139198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 07:02:57.875425 kubelet[2867]: E0120 07:02:57.875370 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 07:02:57.875524 kubelet[2867]: E0120 07:02:57.875433 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 07:02:57.875684 kubelet[2867]: E0120 07:02:57.875597 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rbc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547dfbb977-shb9n_calico-system(32ea1474-9105-468a-bde8-3bef92925725): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 07:02:57.877363 kubelet[2867]: E0120 07:02:57.877304 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:02:58.677275 kubelet[2867]: E0120 07:02:58.677206 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:02:58.727000 audit[4086]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:58.727000 audit[4086]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff789b0140 a2=0 a3=7fff789b012c items=0 ppid=3020 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:58.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:58.736000 audit[4086]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:02:58.736000 audit[4086]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff789b0140 a2=0 a3=0 items=0 ppid=3020 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:02:58.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:02:58.978517 systemd-networkd[1498]: cali6dce84d3f72: Gained IPv6LL Jan 20 07:03:00.119210 kubelet[2867]: I0120 07:03:00.118463 2867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 07:03:00.120651 kubelet[2867]: E0120 07:03:00.120514 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:00.245894 kubelet[2867]: E0120 07:03:00.245802 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:00.247030 containerd[1604]: time="2026-01-20T07:03:00.246379881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8krch,Uid:930ba9b4-4a35-4f62-858d-858957a6d7e8,Namespace:calico-system,Attempt:0,}" Jan 20 07:03:00.249799 containerd[1604]: time="2026-01-20T07:03:00.249326454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2dszx,Uid:2244b5fd-5821-4058-8f7d-13e543d8b404,Namespace:kube-system,Attempt:0,}" Jan 20 07:03:00.325000 audit[4135]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:00.325000 audit[4135]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd96cc6be0 a2=0 a3=7ffd96cc6bcc items=0 ppid=3020 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 
07:03:00.336000 audit[4135]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:00.336000 audit[4135]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd96cc6be0 a2=0 a3=7ffd96cc6bcc items=0 ppid=3020 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.336000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:00.492506 systemd-networkd[1498]: cali7c2a4982ef6: Link UP Jan 20 07:03:00.495761 systemd-networkd[1498]: cali7c2a4982ef6: Gained carrier Jan 20 07:03:00.525316 containerd[1604]: 2026-01-20 07:03:00.352 [INFO][4118] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 07:03:00.525316 containerd[1604]: 2026-01-20 07:03:00.375 [INFO][4118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0 coredns-674b8bbfcf- kube-system 2244b5fd-5821-4058-8f7d-13e543d8b404 905 0 2026-01-20 07:02:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-7-121 coredns-674b8bbfcf-2dszx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7c2a4982ef6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Namespace="kube-system" Pod="coredns-674b8bbfcf-2dszx" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-" Jan 20 07:03:00.525316 containerd[1604]: 2026-01-20 07:03:00.376 [INFO][4118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Namespace="kube-system" Pod="coredns-674b8bbfcf-2dszx" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0" Jan 20 07:03:00.525316 containerd[1604]: 2026-01-20 07:03:00.433 [INFO][4142] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" HandleID="k8s-pod-network.2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Workload="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0" Jan 20 07:03:00.525636 containerd[1604]: 2026-01-20 07:03:00.433 [INFO][4142] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" HandleID="k8s-pod-network.2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Workload="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5910), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-7-121", "pod":"coredns-674b8bbfcf-2dszx", "timestamp":"2026-01-20 07:03:00.433370243 +0000 UTC"}, Hostname:"172-232-7-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 07:03:00.525636 containerd[1604]: 2026-01-20 07:03:00.433 [INFO][4142] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Jan 20 07:03:00.525636 containerd[1604]: 2026-01-20 07:03:00.433 [INFO][4142] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 07:03:00.525636 containerd[1604]: 2026-01-20 07:03:00.433 [INFO][4142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-121' Jan 20 07:03:00.525636 containerd[1604]: 2026-01-20 07:03:00.441 [INFO][4142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" host="172-232-7-121" Jan 20 07:03:00.525636 containerd[1604]: 2026-01-20 07:03:00.447 [INFO][4142] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-121" Jan 20 07:03:00.525636 containerd[1604]: 2026-01-20 07:03:00.453 [INFO][4142] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:00.525636 containerd[1604]: 2026-01-20 07:03:00.456 [INFO][4142] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:00.525636 containerd[1604]: 2026-01-20 07:03:00.459 [INFO][4142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:00.525636 containerd[1604]: 2026-01-20 07:03:00.459 [INFO][4142] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" host="172-232-7-121" Jan 20 07:03:00.525963 containerd[1604]: 2026-01-20 07:03:00.461 [INFO][4142] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be Jan 20 07:03:00.525963 containerd[1604]: 2026-01-20 07:03:00.468 [INFO][4142] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" host="172-232-7-121" Jan 20 07:03:00.525963 containerd[1604]: 2026-01-20 07:03:00.475 [INFO][4142] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.130/26] block=192.168.82.128/26 handle="k8s-pod-network.2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" host="172-232-7-121" Jan 20 07:03:00.525963 containerd[1604]: 2026-01-20 07:03:00.475 [INFO][4142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.130/26] handle="k8s-pod-network.2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" host="172-232-7-121" Jan 20 07:03:00.525963 containerd[1604]: 2026-01-20 07:03:00.475 [INFO][4142] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
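The IPAM trace above ends with Calico claiming 192.168.82.130/26 for coredns-674b8bbfcf-2dszx out of the node-affine block 192.168.82.128/26 (goldmane-666569f655-8krch receives 192.168.82.131 from the same block further down). A minimal sketch of that CIDR relationship, using only the Python standard library; the block and addresses are taken from the log, everything else is illustrative:

# Illustrative check of the block/address relationship reported by Calico IPAM above.
import ipaddress

block = ipaddress.ip_network("192.168.82.128/26")    # node-affine block from the log
claimed = [
    ipaddress.ip_address("192.168.82.130"),          # coredns-674b8bbfcf-2dszx
    ipaddress.ip_address("192.168.82.131"),          # goldmane-666569f655-8krch
]

print(block.num_addresses)         # a /26 block holds 64 addresses
for ip in claimed:
    assert ip in block             # both pod IPs fall inside the affine block
    print(ip, "belongs to", block)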
Jan 20 07:03:00.525963 containerd[1604]: 2026-01-20 07:03:00.475 [INFO][4142] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.130/26] IPv6=[] ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" HandleID="k8s-pod-network.2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Workload="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0" Jan 20 07:03:00.526161 containerd[1604]: 2026-01-20 07:03:00.479 [INFO][4118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Namespace="kube-system" Pod="coredns-674b8bbfcf-2dszx" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2244b5fd-5821-4058-8f7d-13e543d8b404", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"", Pod:"coredns-674b8bbfcf-2dszx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c2a4982ef6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:00.526161 containerd[1604]: 2026-01-20 07:03:00.479 [INFO][4118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.130/32] ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Namespace="kube-system" Pod="coredns-674b8bbfcf-2dszx" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0" Jan 20 07:03:00.526161 containerd[1604]: 2026-01-20 07:03:00.481 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c2a4982ef6 ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Namespace="kube-system" Pod="coredns-674b8bbfcf-2dszx" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0" Jan 20 07:03:00.526161 containerd[1604]: 2026-01-20 07:03:00.501 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Namespace="kube-system" Pod="coredns-674b8bbfcf-2dszx" 
WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0" Jan 20 07:03:00.526161 containerd[1604]: 2026-01-20 07:03:00.501 [INFO][4118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Namespace="kube-system" Pod="coredns-674b8bbfcf-2dszx" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2244b5fd-5821-4058-8f7d-13e543d8b404", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be", Pod:"coredns-674b8bbfcf-2dszx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c2a4982ef6", MAC:"3e:a1:38:92:a2:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:00.526161 containerd[1604]: 2026-01-20 07:03:00.518 [INFO][4118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" Namespace="kube-system" Pod="coredns-674b8bbfcf-2dszx" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--2dszx-eth0" Jan 20 07:03:00.562921 containerd[1604]: time="2026-01-20T07:03:00.562736887Z" level=info msg="connecting to shim 2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be" address="unix:///run/containerd/s/cc6a8eba777e090d0d8154a5ad72c3ea6fed48d0dc4d4df28cc91be7cbb88362" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:03:00.610829 systemd-networkd[1498]: cali97252e80624: Link UP Jan 20 07:03:00.612501 systemd-networkd[1498]: cali97252e80624: Gained carrier Jan 20 07:03:00.645417 systemd[1]: Started cri-containerd-2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be.scope - libcontainer container 2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be. 
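In the WorkloadEndpoint dumps above, the endpoint ports are printed in hexadecimal (Port:0x35, Port:0x23c1); in decimal these are 53 for dns/dns-tcp and 9153 for the coredns metrics port. A trivial conversion, shown only to make the dump easier to read:

# The Port values in the WorkloadEndpoint dump are hexadecimal integers.
for name, port in [("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23c1)]:
    print(f"{name}: {port}")       # prints 53, 53 and 9153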
Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.367 [INFO][4115] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.403 [INFO][4115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--121-k8s-goldmane--666569f655--8krch-eth0 goldmane-666569f655- calico-system 930ba9b4-4a35-4f62-858d-858957a6d7e8 914 0 2026-01-20 07:02:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-232-7-121 goldmane-666569f655-8krch eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali97252e80624 [] [] }} ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Namespace="calico-system" Pod="goldmane-666569f655-8krch" WorkloadEndpoint="172--232--7--121-k8s-goldmane--666569f655--8krch-" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.403 [INFO][4115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Namespace="calico-system" Pod="goldmane-666569f655-8krch" WorkloadEndpoint="172--232--7--121-k8s-goldmane--666569f655--8krch-eth0" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.451 [INFO][4147] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" HandleID="k8s-pod-network.1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Workload="172--232--7--121-k8s-goldmane--666569f655--8krch-eth0" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.452 [INFO][4147] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" HandleID="k8s-pod-network.1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Workload="172--232--7--121-k8s-goldmane--666569f655--8krch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-121", "pod":"goldmane-666569f655-8krch", "timestamp":"2026-01-20 07:03:00.451411285 +0000 UTC"}, Hostname:"172-232-7-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.452 [INFO][4147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.475 [INFO][4147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.475 [INFO][4147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-121' Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.544 [INFO][4147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" host="172-232-7-121" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.558 [INFO][4147] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-121" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.565 [INFO][4147] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.568 [INFO][4147] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.572 [INFO][4147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.572 [INFO][4147] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" host="172-232-7-121" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.574 [INFO][4147] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0 Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.581 [INFO][4147] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" host="172-232-7-121" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.595 [INFO][4147] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.131/26] block=192.168.82.128/26 handle="k8s-pod-network.1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" host="172-232-7-121" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.595 [INFO][4147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.131/26] handle="k8s-pod-network.1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" host="172-232-7-121" Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.595 [INFO][4147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
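The kubelet dns.go:153 "Nameserver limits exceeded" warnings that recur through this section mean the node's resolv.conf lists more nameservers than kubelet forwards to pods; the applied line keeps only 172.232.0.16, 172.232.0.21 and 172.232.0.13. A rough sketch of that truncation, assuming the conventional kubelet cap of three nameservers; the fourth entry below is invented purely to trigger the omission:

# Rough illustration of kubelet trimming the nameserver list (cap of three assumed).
MAX_NAMESERVERS = 3

# The first three entries are the ones the log shows being applied;
# the last one is a made-up example of an entry that would be omitted.
nameservers = ["172.232.0.16", "172.232.0.21", "172.232.0.13", "10.0.0.53"]

applied, omitted = nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]
if omitted:
    print("Nameserver limits exceeded, omitted:", " ".join(omitted))
print("applied nameserver line is:", " ".join(applied))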
Jan 20 07:03:00.648891 containerd[1604]: 2026-01-20 07:03:00.595 [INFO][4147] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.131/26] IPv6=[] ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" HandleID="k8s-pod-network.1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Workload="172--232--7--121-k8s-goldmane--666569f655--8krch-eth0" Jan 20 07:03:00.650475 containerd[1604]: 2026-01-20 07:03:00.601 [INFO][4115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Namespace="calico-system" Pod="goldmane-666569f655-8krch" WorkloadEndpoint="172--232--7--121-k8s-goldmane--666569f655--8krch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-goldmane--666569f655--8krch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"930ba9b4-4a35-4f62-858d-858957a6d7e8", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"", Pod:"goldmane-666569f655-8krch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97252e80624", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:00.650475 containerd[1604]: 2026-01-20 07:03:00.604 [INFO][4115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.131/32] ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Namespace="calico-system" Pod="goldmane-666569f655-8krch" WorkloadEndpoint="172--232--7--121-k8s-goldmane--666569f655--8krch-eth0" Jan 20 07:03:00.650475 containerd[1604]: 2026-01-20 07:03:00.604 [INFO][4115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97252e80624 ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Namespace="calico-system" Pod="goldmane-666569f655-8krch" WorkloadEndpoint="172--232--7--121-k8s-goldmane--666569f655--8krch-eth0" Jan 20 07:03:00.650475 containerd[1604]: 2026-01-20 07:03:00.616 [INFO][4115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Namespace="calico-system" Pod="goldmane-666569f655-8krch" WorkloadEndpoint="172--232--7--121-k8s-goldmane--666569f655--8krch-eth0" Jan 20 07:03:00.650475 containerd[1604]: 2026-01-20 07:03:00.617 [INFO][4115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Namespace="calico-system" Pod="goldmane-666569f655-8krch" 
WorkloadEndpoint="172--232--7--121-k8s-goldmane--666569f655--8krch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-goldmane--666569f655--8krch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"930ba9b4-4a35-4f62-858d-858957a6d7e8", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0", Pod:"goldmane-666569f655-8krch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97252e80624", MAC:"3a:df:35:ea:d4:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:00.650475 containerd[1604]: 2026-01-20 07:03:00.639 [INFO][4115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" Namespace="calico-system" Pod="goldmane-666569f655-8krch" WorkloadEndpoint="172--232--7--121-k8s-goldmane--666569f655--8krch-eth0" Jan 20 07:03:00.669000 audit: BPF prog-id=186 op=LOAD Jan 20 07:03:00.673000 audit: BPF prog-id=187 op=LOAD Jan 20 07:03:00.673000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4173 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235333065353239323434373063646135346637663836363639643935 Jan 20 07:03:00.673000 audit: BPF prog-id=187 op=UNLOAD Jan 20 07:03:00.673000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4173 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235333065353239323434373063646135346637663836363639643935 Jan 20 07:03:00.674000 audit: BPF prog-id=188 op=LOAD Jan 20 07:03:00.674000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4173 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235333065353239323434373063646135346637663836363639643935 Jan 20 07:03:00.674000 audit: BPF prog-id=189 op=LOAD Jan 20 07:03:00.674000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4173 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235333065353239323434373063646135346637663836363639643935 Jan 20 07:03:00.674000 audit: BPF prog-id=189 op=UNLOAD Jan 20 07:03:00.674000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4173 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235333065353239323434373063646135346637663836363639643935 Jan 20 07:03:00.674000 audit: BPF prog-id=188 op=UNLOAD Jan 20 07:03:00.674000 audit[4184]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4173 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235333065353239323434373063646135346637663836363639643935 Jan 20 07:03:00.675000 audit: BPF prog-id=190 op=LOAD Jan 20 07:03:00.675000 audit[4184]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4173 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235333065353239323434373063646135346637663836363639643935 Jan 20 07:03:00.696718 kubelet[2867]: E0120 07:03:00.696426 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:00.716046 containerd[1604]: time="2026-01-20T07:03:00.715973089Z" level=info msg="connecting to shim 
1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0" address="unix:///run/containerd/s/861ac0d818e92702b9fe976d044c38d268428a1e743706ff06259e12e771c309" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:03:00.773267 containerd[1604]: time="2026-01-20T07:03:00.772249567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2dszx,Uid:2244b5fd-5821-4058-8f7d-13e543d8b404,Namespace:kube-system,Attempt:0,} returns sandbox id \"2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be\"" Jan 20 07:03:00.776080 kubelet[2867]: E0120 07:03:00.776023 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:00.786641 systemd[1]: Started cri-containerd-1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0.scope - libcontainer container 1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0. Jan 20 07:03:00.792158 containerd[1604]: time="2026-01-20T07:03:00.791730549Z" level=info msg="CreateContainer within sandbox \"2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 07:03:00.814220 containerd[1604]: time="2026-01-20T07:03:00.814145726Z" level=info msg="Container c2d7a47da83d308d96a4dc8130f10e690d4f319dfa787d773086e1c0e5782b4c: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:03:00.820317 containerd[1604]: time="2026-01-20T07:03:00.820249833Z" level=info msg="CreateContainer within sandbox \"2530e52924470cda54f7f86669d95c1bd13554f7c5a1be35ade00fa08b5704be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2d7a47da83d308d96a4dc8130f10e690d4f319dfa787d773086e1c0e5782b4c\"" Jan 20 07:03:00.822371 containerd[1604]: time="2026-01-20T07:03:00.822323696Z" level=info msg="StartContainer for \"c2d7a47da83d308d96a4dc8130f10e690d4f319dfa787d773086e1c0e5782b4c\"" Jan 20 07:03:00.826223 containerd[1604]: time="2026-01-20T07:03:00.826158530Z" level=info msg="connecting to shim c2d7a47da83d308d96a4dc8130f10e690d4f319dfa787d773086e1c0e5782b4c" address="unix:///run/containerd/s/cc6a8eba777e090d0d8154a5ad72c3ea6fed48d0dc4d4df28cc91be7cbb88362" protocol=ttrpc version=3 Jan 20 07:03:00.840000 audit: BPF prog-id=191 op=LOAD Jan 20 07:03:00.844333 kernel: kauditd_printk_skb: 61 callbacks suppressed Jan 20 07:03:00.844475 kernel: audit: type=1334 audit(1768892580.840:602): prog-id=191 op=LOAD Jan 20 07:03:00.847000 audit: BPF prog-id=192 op=LOAD Jan 20 07:03:00.847000 audit[4229]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4216 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.855783 kernel: audit: type=1334 audit(1768892580.847:603): prog-id=192 op=LOAD Jan 20 07:03:00.856109 kernel: audit: type=1300 audit(1768892580.847:603): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4216 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.864463 kernel: audit: type=1327 audit(1768892580.847:603): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161333939326362376364613435613263663337656434663239636536 Jan 20 07:03:00.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161333939326362376364613435613263663337656434663239636536 Jan 20 07:03:00.847000 audit: BPF prog-id=192 op=UNLOAD Jan 20 07:03:00.894316 kernel: audit: type=1334 audit(1768892580.847:604): prog-id=192 op=UNLOAD Jan 20 07:03:00.894507 kernel: audit: type=1300 audit(1768892580.847:604): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4216 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.847000 audit[4229]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4216 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.895253 systemd[1]: Started cri-containerd-c2d7a47da83d308d96a4dc8130f10e690d4f319dfa787d773086e1c0e5782b4c.scope - libcontainer container c2d7a47da83d308d96a4dc8130f10e690d4f319dfa787d773086e1c0e5782b4c. Jan 20 07:03:00.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161333939326362376364613435613263663337656434663239636536 Jan 20 07:03:00.921134 kernel: audit: type=1327 audit(1768892580.847:604): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161333939326362376364613435613263663337656434663239636536 Jan 20 07:03:00.921241 kernel: audit: type=1334 audit(1768892580.847:605): prog-id=193 op=LOAD Jan 20 07:03:00.847000 audit: BPF prog-id=193 op=LOAD Jan 20 07:03:00.847000 audit[4229]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4216 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.930960 kernel: audit: type=1300 audit(1768892580.847:605): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4216 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161333939326362376364613435613263663337656434663239636536 Jan 20 07:03:00.942066 kernel: audit: type=1327 audit(1768892580.847:605): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161333939326362376364613435613263663337656434663239636536 Jan 20 07:03:00.847000 audit: BPF prog-id=194 op=LOAD Jan 20 07:03:00.847000 audit[4229]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4216 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161333939326362376364613435613263663337656434663239636536 Jan 20 07:03:00.847000 audit: BPF prog-id=194 op=UNLOAD Jan 20 07:03:00.847000 audit[4229]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4216 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161333939326362376364613435613263663337656434663239636536 Jan 20 07:03:00.847000 audit: BPF prog-id=193 op=UNLOAD Jan 20 07:03:00.847000 audit[4229]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4216 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161333939326362376364613435613263663337656434663239636536 Jan 20 07:03:00.847000 audit: BPF prog-id=195 op=LOAD Jan 20 07:03:00.847000 audit[4229]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4216 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161333939326362376364613435613263663337656434663239636536 Jan 20 07:03:00.940000 audit: BPF prog-id=196 op=LOAD Jan 20 07:03:00.947000 audit: BPF prog-id=197 op=LOAD Jan 20 07:03:00.947000 audit[4254]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4173 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.947000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643761343764613833643330386439366134646338313330663130 Jan 20 07:03:00.947000 audit: BPF prog-id=197 op=UNLOAD Jan 20 07:03:00.947000 audit[4254]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4173 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643761343764613833643330386439366134646338313330663130 Jan 20 07:03:00.948000 audit: BPF prog-id=198 op=LOAD Jan 20 07:03:00.948000 audit[4254]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4173 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643761343764613833643330386439366134646338313330663130 Jan 20 07:03:00.948000 audit: BPF prog-id=199 op=LOAD Jan 20 07:03:00.948000 audit[4254]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4173 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643761343764613833643330386439366134646338313330663130 Jan 20 07:03:00.948000 audit: BPF prog-id=199 op=UNLOAD Jan 20 07:03:00.948000 audit[4254]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4173 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643761343764613833643330386439366134646338313330663130 Jan 20 07:03:00.949000 audit: BPF prog-id=198 op=UNLOAD Jan 20 07:03:00.949000 audit[4254]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4173 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.949000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643761343764613833643330386439366134646338313330663130 Jan 20 07:03:00.949000 audit: BPF prog-id=200 op=LOAD Jan 20 07:03:00.949000 audit[4254]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4173 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:00.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643761343764613833643330386439366134646338313330663130 Jan 20 07:03:01.032530 containerd[1604]: time="2026-01-20T07:03:01.032496136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8krch,Uid:930ba9b4-4a35-4f62-858d-858957a6d7e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a3992cb7cda45a2cf37ed4f29ce675d79ec7eb9541012b33acdf423102f19d0\"" Jan 20 07:03:01.037133 containerd[1604]: time="2026-01-20T07:03:01.037105220Z" level=info msg="StartContainer for \"c2d7a47da83d308d96a4dc8130f10e690d4f319dfa787d773086e1c0e5782b4c\" returns successfully" Jan 20 07:03:01.038635 containerd[1604]: time="2026-01-20T07:03:01.038296031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 07:03:01.232795 containerd[1604]: time="2026-01-20T07:03:01.232541774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:01.235308 containerd[1604]: time="2026-01-20T07:03:01.235246418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 07:03:01.235390 containerd[1604]: time="2026-01-20T07:03:01.235360148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:01.235629 kubelet[2867]: E0120 07:03:01.235573 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 07:03:01.238277 kubelet[2867]: E0120 07:03:01.236104 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 07:03:01.238277 kubelet[2867]: E0120 07:03:01.236545 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bljpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8krch_calico-system(930ba9b4-4a35-4f62-858d-858957a6d7e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:01.238277 kubelet[2867]: E0120 07:03:01.238207 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:03:01.352000 audit: BPF prog-id=201 op=LOAD Jan 20 07:03:01.352000 audit[4328]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 
a1=7fff8d94cea0 a2=98 a3=1fffffffffffffff items=0 ppid=4279 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.352000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 07:03:01.352000 audit: BPF prog-id=201 op=UNLOAD Jan 20 07:03:01.352000 audit[4328]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff8d94ce70 a3=0 items=0 ppid=4279 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.352000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 07:03:01.352000 audit: BPF prog-id=202 op=LOAD Jan 20 07:03:01.352000 audit[4328]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8d94cd80 a2=94 a3=3 items=0 ppid=4279 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.352000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 07:03:01.352000 audit: BPF prog-id=202 op=UNLOAD Jan 20 07:03:01.352000 audit[4328]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff8d94cd80 a2=94 a3=3 items=0 ppid=4279 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.352000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 07:03:01.352000 audit: BPF prog-id=203 op=LOAD Jan 20 07:03:01.352000 audit[4328]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8d94cdc0 a2=94 a3=7fff8d94cfa0 items=0 ppid=4279 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.352000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 07:03:01.352000 audit: BPF prog-id=203 op=UNLOAD Jan 20 07:03:01.352000 audit[4328]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff8d94cdc0 a2=94 a3=7fff8d94cfa0 items=0 ppid=4279 pid=4328 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.352000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 07:03:01.354000 audit: BPF prog-id=204 op=LOAD Jan 20 07:03:01.354000 audit[4329]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc0fde9a30 a2=98 a3=3 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.354000 audit: BPF prog-id=204 op=UNLOAD Jan 20 07:03:01.354000 audit[4329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc0fde9a00 a3=0 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.354000 audit: BPF prog-id=205 op=LOAD Jan 20 07:03:01.354000 audit[4329]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0fde9820 a2=94 a3=54428f items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.354000 audit: BPF prog-id=205 op=UNLOAD Jan 20 07:03:01.354000 audit[4329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc0fde9820 a2=94 a3=54428f items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.354000 audit: BPF prog-id=206 op=LOAD Jan 20 07:03:01.354000 audit[4329]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0fde9850 a2=94 a3=2 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.354000 audit: BPF prog-id=206 op=UNLOAD Jan 20 07:03:01.354000 audit[4329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc0fde9850 a2=0 a3=2 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.354000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.705298 kubelet[2867]: E0120 07:03:01.703437 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:03:01.709677 kubelet[2867]: E0120 07:03:01.709636 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:01.738000 audit: BPF prog-id=207 op=LOAD Jan 20 07:03:01.738000 audit[4329]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0fde9710 a2=94 a3=1 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.739000 audit: BPF prog-id=207 op=UNLOAD Jan 20 07:03:01.739000 audit[4329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc0fde9710 a2=94 a3=1 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.739000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.774000 audit: BPF prog-id=208 op=LOAD Jan 20 07:03:01.774000 audit[4329]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc0fde9700 a2=94 a3=4 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.774000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.775000 audit: BPF prog-id=208 op=UNLOAD Jan 20 07:03:01.775000 audit[4329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc0fde9700 a2=0 a3=4 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.775000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.776000 audit: BPF prog-id=209 op=LOAD Jan 20 07:03:01.776000 audit[4329]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc0fde9560 a2=94 a3=5 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.776000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.777000 audit: BPF prog-id=209 op=UNLOAD Jan 20 07:03:01.777000 audit[4329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc0fde9560 a2=0 a3=5 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.777000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.777000 audit: BPF prog-id=210 op=LOAD 
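The audit PROCTITLE fields throughout this section are hex-encoded command lines with NUL-separated arguments. The value repeated in the bpftool records above decodes to "bpftool map list --json", and the value in the earlier iptables-restore records decodes to "iptables-restore -w 5 -W 100000 --noflush --counters". A small decoder sketch:

# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hexval: str) -> str:
    return " ".join(part.decode() for part in bytes.fromhex(hexval).split(b"\x00"))

# Value copied from the bpftool audit records above.
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# -> bpftool map list --json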
Jan 20 07:03:01.777000 audit[4329]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc0fde9780 a2=94 a3=6 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.777000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.778000 audit: BPF prog-id=210 op=UNLOAD Jan 20 07:03:01.778000 audit[4329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc0fde9780 a2=0 a3=6 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.778000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.778000 audit[4353]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:01.778000 audit[4353]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc9f6390d0 a2=0 a3=7ffc9f6390bc items=0 ppid=3020 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:01.779000 audit: BPF prog-id=211 op=LOAD Jan 20 07:03:01.779000 audit[4329]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc0fde8f30 a2=94 a3=88 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.781000 audit: BPF prog-id=212 op=LOAD Jan 20 07:03:01.781000 audit[4329]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc0fde8db0 a2=94 a3=2 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.781000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.781000 audit: BPF prog-id=212 op=UNLOAD Jan 20 07:03:01.781000 audit[4329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc0fde8de0 a2=0 a3=7ffc0fde8ee0 items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.781000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.783000 audit[4353]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:01.784000 audit: BPF prog-id=211 op=UNLOAD Jan 20 07:03:01.784000 audit[4329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=fae1d10 a2=0 a3=ba78ec852299aecc items=0 ppid=4279 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.784000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 07:03:01.783000 audit[4353]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc9f6390d0 a2=0 a3=0 items=0 ppid=3020 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.783000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:01.807000 audit: BPF prog-id=213 op=LOAD Jan 20 07:03:01.807000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef9238750 a2=98 a3=1999999999999999 items=0 ppid=4279 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.807000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 07:03:01.807000 audit: BPF prog-id=213 op=UNLOAD Jan 20 07:03:01.807000 audit[4357]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffef9238720 a3=0 items=0 ppid=4279 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.807000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 07:03:01.807000 audit: BPF prog-id=214 op=LOAD Jan 20 07:03:01.807000 audit[4357]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef9238630 a2=94 a3=ffff items=0 ppid=4279 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.807000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 07:03:01.808000 audit: BPF prog-id=214 op=UNLOAD Jan 20 07:03:01.808000 audit[4357]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffef9238630 a2=94 a3=ffff items=0 ppid=4279 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.808000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 07:03:01.808000 audit: BPF prog-id=215 op=LOAD Jan 20 07:03:01.808000 audit[4357]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef9238670 a2=94 a3=7ffef9238850 items=0 ppid=4279 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.808000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 07:03:01.808000 audit: BPF prog-id=215 op=UNLOAD Jan 20 07:03:01.808000 audit[4357]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffef9238670 a2=94 a3=7ffef9238850 items=0 ppid=4279 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.808000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 07:03:01.829000 audit[4364]: NETFILTER_CFG table=filter:123 family=2 entries=17 op=nft_register_rule pid=4364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:01.829000 audit[4364]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff80d28cb0 a2=0 a3=7fff80d28c9c items=0 ppid=3020 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:01.840000 audit[4364]: NETFILTER_CFG table=nat:124 family=2 entries=35 op=nft_register_chain pid=4364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:01.840000 audit[4364]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff80d28cb0 a2=0 a3=7fff80d28c9c items=0 ppid=3020 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:01.927600 systemd-networkd[1498]: vxlan.calico: Link UP Jan 20 07:03:01.927612 systemd-networkd[1498]: vxlan.calico: Gained carrier Jan 20 07:03:01.973000 audit: BPF prog-id=216 op=LOAD Jan 20 07:03:01.973000 audit[4383]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff77787560 a2=98 a3=0 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.973000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.975000 audit: BPF prog-id=216 op=UNLOAD Jan 20 07:03:01.975000 
audit[4383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff77787530 a3=0 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.975000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.975000 audit: BPF prog-id=217 op=LOAD Jan 20 07:03:01.975000 audit[4383]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff77787370 a2=94 a3=54428f items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.975000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.975000 audit: BPF prog-id=217 op=UNLOAD Jan 20 07:03:01.975000 audit[4383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff77787370 a2=94 a3=54428f items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.975000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.975000 audit: BPF prog-id=218 op=LOAD Jan 20 07:03:01.975000 audit[4383]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff777873a0 a2=94 a3=2 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.975000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.975000 audit: BPF prog-id=218 op=UNLOAD Jan 20 07:03:01.975000 audit[4383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff777873a0 a2=0 a3=2 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.975000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.975000 audit: BPF prog-id=219 op=LOAD Jan 20 07:03:01.975000 audit[4383]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff77787150 a2=94 a3=4 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.975000 
audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.975000 audit: BPF prog-id=219 op=UNLOAD Jan 20 07:03:01.975000 audit[4383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff77787150 a2=94 a3=4 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.975000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.975000 audit: BPF prog-id=220 op=LOAD Jan 20 07:03:01.975000 audit[4383]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff77787250 a2=94 a3=7fff777873d0 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.975000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.975000 audit: BPF prog-id=220 op=UNLOAD Jan 20 07:03:01.975000 audit[4383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff77787250 a2=0 a3=7fff777873d0 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.975000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.985000 audit: BPF prog-id=221 op=LOAD Jan 20 07:03:01.985000 audit[4383]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff77786980 a2=94 a3=2 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.985000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.986000 audit: BPF prog-id=221 op=UNLOAD Jan 20 07:03:01.986000 audit[4383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff77786980 a2=0 a3=2 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:01.986000 audit: BPF prog-id=222 op=LOAD Jan 20 07:03:01.986000 
audit[4383]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff77786a80 a2=94 a3=30 items=0 ppid=4279 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:01.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 07:03:02.000000 audit: BPF prog-id=223 op=LOAD Jan 20 07:03:02.000000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc457f0470 a2=98 a3=0 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.000000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.001000 audit: BPF prog-id=223 op=UNLOAD Jan 20 07:03:02.001000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc457f0440 a3=0 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.001000 audit: BPF prog-id=224 op=LOAD Jan 20 07:03:02.001000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc457f0260 a2=94 a3=54428f items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.001000 audit: BPF prog-id=224 op=UNLOAD Jan 20 07:03:02.001000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc457f0260 a2=94 a3=54428f items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.001000 audit: BPF prog-id=225 op=LOAD Jan 20 07:03:02.001000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc457f0290 a2=94 a3=2 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.001000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.001000 audit: BPF prog-id=225 op=UNLOAD Jan 20 07:03:02.001000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc457f0290 a2=0 a3=2 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.227000 audit: BPF prog-id=226 op=LOAD Jan 20 07:03:02.227000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc457f0150 a2=94 a3=1 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.227000 audit: BPF prog-id=226 op=UNLOAD Jan 20 07:03:02.227000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc457f0150 a2=94 a3=1 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.237000 audit: BPF prog-id=227 op=LOAD Jan 20 07:03:02.237000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc457f0140 a2=94 a3=4 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.237000 audit: BPF prog-id=227 op=UNLOAD Jan 20 07:03:02.237000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc457f0140 a2=0 a3=4 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.237000 audit: BPF prog-id=228 op=LOAD Jan 20 07:03:02.237000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc457effa0 a2=94 a3=5 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.238000 audit: BPF prog-id=228 op=UNLOAD Jan 20 07:03:02.238000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc457effa0 a2=0 a3=5 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.238000 audit: BPF prog-id=229 op=LOAD Jan 20 07:03:02.238000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc457f01c0 a2=94 a3=6 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.238000 audit: BPF prog-id=229 op=UNLOAD Jan 20 07:03:02.238000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc457f01c0 a2=0 a3=6 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.238000 audit: BPF prog-id=230 op=LOAD Jan 20 07:03:02.238000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc457ef970 a2=94 a3=88 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.238000 audit: BPF prog-id=231 op=LOAD Jan 20 07:03:02.238000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc457ef7f0 a2=94 a3=2 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.238000 audit: BPF prog-id=231 op=UNLOAD Jan 20 07:03:02.238000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc457ef820 a2=0 
a3=7ffc457ef920 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.238000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.239000 audit: BPF prog-id=230 op=UNLOAD Jan 20 07:03:02.239000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=8902d10 a2=0 a3=50d6ef3c36287d81 items=0 ppid=4279 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.239000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 07:03:02.246531 containerd[1604]: time="2026-01-20T07:03:02.246424208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c495f47-n5kdv,Uid:18f52096-e17f-46d5-a51d-1ae5ca49fd14,Namespace:calico-apiserver,Attempt:0,}" Jan 20 07:03:02.245000 audit: BPF prog-id=222 op=UNLOAD Jan 20 07:03:02.245000 audit[4279]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000d7f6c0 a2=0 a3=0 items=0 ppid=3999 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.245000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 07:03:02.249140 kubelet[2867]: E0120 07:03:02.249080 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:02.249730 containerd[1604]: time="2026-01-20T07:03:02.249369002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w6jwn,Uid:dd9207e8-fe1e-43a2-ab22-bb4ac860e560,Namespace:calico-system,Attempt:0,}" Jan 20 07:03:02.250318 containerd[1604]: time="2026-01-20T07:03:02.250055352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8vm6n,Uid:e65373ef-4973-4aa5-9425-ede746ebd364,Namespace:kube-system,Attempt:0,}" Jan 20 07:03:02.371384 systemd-networkd[1498]: cali7c2a4982ef6: Gained IPv6LL Jan 20 07:03:02.521000 audit[4460]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=4460 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 07:03:02.521000 audit[4460]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd23f604b0 a2=0 a3=7ffd23f6049c items=0 ppid=4279 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.521000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 07:03:02.532000 audit[4464]: NETFILTER_CFG table=nat:126 family=2 entries=15 op=nft_register_chain pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 07:03:02.532000 
audit[4464]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe10155740 a2=0 a3=7ffe1015572c items=0 ppid=4279 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.532000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 07:03:02.554000 audit[4459]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4459 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 07:03:02.554000 audit[4459]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff155d9f30 a2=0 a3=7fff155d9f1c items=0 ppid=4279 pid=4459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.554000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 07:03:02.563755 systemd-networkd[1498]: cali97252e80624: Gained IPv6LL Jan 20 07:03:02.585000 audit[4474]: NETFILTER_CFG table=filter:128 family=2 entries=164 op=nft_register_chain pid=4474 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 07:03:02.585000 audit[4474]: SYSCALL arch=c000003e syscall=46 success=yes exit=95048 a0=3 a1=7ffdf5795840 a2=0 a3=7ffdf579582c items=0 ppid=4279 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.585000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 07:03:02.661091 systemd-networkd[1498]: calib5a6679f433: Link UP Jan 20 07:03:02.663217 systemd-networkd[1498]: calib5a6679f433: Gained carrier Jan 20 07:03:02.679680 kubelet[2867]: I0120 07:03:02.678559 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2dszx" podStartSLOduration=50.678511476 podStartE2EDuration="50.678511476s" podCreationTimestamp="2026-01-20 07:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 07:03:01.744663002 +0000 UTC m=+54.775336537" watchObservedRunningTime="2026-01-20 07:03:02.678511476 +0000 UTC m=+55.709185021" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.465 [INFO][4408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--121-k8s-csi--node--driver--w6jwn-eth0 csi-node-driver- calico-system dd9207e8-fe1e-43a2-ab22-bb4ac860e560 796 0 2026-01-20 07:02:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-7-121 csi-node-driver-w6jwn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] 
calib5a6679f433 [] [] }} ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Namespace="calico-system" Pod="csi-node-driver-w6jwn" WorkloadEndpoint="172--232--7--121-k8s-csi--node--driver--w6jwn-" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.465 [INFO][4408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Namespace="calico-system" Pod="csi-node-driver-w6jwn" WorkloadEndpoint="172--232--7--121-k8s-csi--node--driver--w6jwn-eth0" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.582 [INFO][4456] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" HandleID="k8s-pod-network.2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Workload="172--232--7--121-k8s-csi--node--driver--w6jwn-eth0" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.583 [INFO][4456] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" HandleID="k8s-pod-network.2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Workload="172--232--7--121-k8s-csi--node--driver--w6jwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab790), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-121", "pod":"csi-node-driver-w6jwn", "timestamp":"2026-01-20 07:03:02.58230086 +0000 UTC"}, Hostname:"172-232-7-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.583 [INFO][4456] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.583 [INFO][4456] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
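Annotation: the NETFILTER_CFG records in this stretch come from two writers. The iptables-restore children of ppid 3020 carry a proctitle that decodes to `iptables-restore -w 5 -W 100000 --noflush --counters` (the shape kube-proxy typically uses, though the parent process is not identified in this excerpt), while calico-node's children (ppid 4279) run `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000`. A rough tally of how many entries each table received can be pulled straight from these records; a small sketch, standard library only, with a helper name of my own:

```python
import re
from collections import Counter

# Tally nft_register_* entries per table from NETFILTER_CFG audit records,
# e.g. "NETFILTER_CFG table=filter:128 family=2 entries=164 op=nft_register_chain".
NETFILTER_RE = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=\d+ entries=(\d+)")

def tally_netfilter(journal_text: str) -> Counter:
    totals = Counter()
    for table, entries in NETFILTER_RE.findall(journal_text):
        totals[table] += int(entries)
    return totals

# Example with values copied from the records above.
excerpt = (
    "NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain "
    "NETFILTER_CFG table=filter:128 family=2 entries=164 op=nft_register_chain"
)
print(dict(tally_netfilter(excerpt)))  # {'mangle': 16, 'filter': 164}
```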
Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.584 [INFO][4456] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-121' Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.607 [INFO][4456] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" host="172-232-7-121" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.618 [INFO][4456] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-121" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.626 [INFO][4456] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.630 [INFO][4456] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.634 [INFO][4456] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.634 [INFO][4456] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" host="172-232-7-121" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.636 [INFO][4456] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.641 [INFO][4456] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" host="172-232-7-121" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.649 [INFO][4456] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.132/26] block=192.168.82.128/26 handle="k8s-pod-network.2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" host="172-232-7-121" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.649 [INFO][4456] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.132/26] handle="k8s-pod-network.2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" host="172-232-7-121" Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.649 [INFO][4456] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
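Annotation: the IPAM walk above ends with the Calico CNI plugin claiming 192.168.82.132/26 for csi-node-driver-w6jwn out of this node's affine block 192.168.82.128/26. A quick standard-library check that the claimed address really sits in that /26, and that such a block carries 64 addresses:

```python
import ipaddress

# Values copied from the IPAM records above.
block = ipaddress.ip_network("192.168.82.128/26")
claimed = ipaddress.ip_address("192.168.82.132")

print(block.num_addresses)   # 64 addresses in a node-affine /26 block
print(claimed in block)      # True: the claimed IP falls inside the block
print(ipaddress.ip_interface("192.168.82.132/26").network == block)  # True
```

The endpoint itself is then written with the single-address form 192.168.82.132/32 and the host-side veth calib5a6679f433, as the following records show.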
Jan 20 07:03:02.684024 containerd[1604]: 2026-01-20 07:03:02.651 [INFO][4456] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.132/26] IPv6=[] ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" HandleID="k8s-pod-network.2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Workload="172--232--7--121-k8s-csi--node--driver--w6jwn-eth0" Jan 20 07:03:02.685802 containerd[1604]: 2026-01-20 07:03:02.653 [INFO][4408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Namespace="calico-system" Pod="csi-node-driver-w6jwn" WorkloadEndpoint="172--232--7--121-k8s-csi--node--driver--w6jwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-csi--node--driver--w6jwn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dd9207e8-fe1e-43a2-ab22-bb4ac860e560", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"", Pod:"csi-node-driver-w6jwn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5a6679f433", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:02.685802 containerd[1604]: 2026-01-20 07:03:02.653 [INFO][4408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.132/32] ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Namespace="calico-system" Pod="csi-node-driver-w6jwn" WorkloadEndpoint="172--232--7--121-k8s-csi--node--driver--w6jwn-eth0" Jan 20 07:03:02.685802 containerd[1604]: 2026-01-20 07:03:02.653 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5a6679f433 ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Namespace="calico-system" Pod="csi-node-driver-w6jwn" WorkloadEndpoint="172--232--7--121-k8s-csi--node--driver--w6jwn-eth0" Jan 20 07:03:02.685802 containerd[1604]: 2026-01-20 07:03:02.664 [INFO][4408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Namespace="calico-system" Pod="csi-node-driver-w6jwn" WorkloadEndpoint="172--232--7--121-k8s-csi--node--driver--w6jwn-eth0" Jan 20 07:03:02.685802 containerd[1604]: 2026-01-20 07:03:02.664 [INFO][4408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Namespace="calico-system" 
Pod="csi-node-driver-w6jwn" WorkloadEndpoint="172--232--7--121-k8s-csi--node--driver--w6jwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-csi--node--driver--w6jwn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dd9207e8-fe1e-43a2-ab22-bb4ac860e560", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d", Pod:"csi-node-driver-w6jwn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5a6679f433", MAC:"ee:42:d2:88:84:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:02.685802 containerd[1604]: 2026-01-20 07:03:02.681 [INFO][4408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" Namespace="calico-system" Pod="csi-node-driver-w6jwn" WorkloadEndpoint="172--232--7--121-k8s-csi--node--driver--w6jwn-eth0" Jan 20 07:03:02.721219 kubelet[2867]: E0120 07:03:02.721139 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:02.725537 kubelet[2867]: E0120 07:03:02.725500 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:03:02.728999 containerd[1604]: time="2026-01-20T07:03:02.726977320Z" level=info msg="connecting to shim 2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d" address="unix:///run/containerd/s/381a7518acc4ce9902be574093d4442638191b0ec5d3aeab1c89d2babfcc6b35" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:03:02.794606 systemd[1]: Started cri-containerd-2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d.scope - libcontainer container 2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d. 
Jan 20 07:03:02.809000 audit[4528]: NETFILTER_CFG table=filter:129 family=2 entries=44 op=nft_register_chain pid=4528 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 07:03:02.809000 audit[4528]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7fff32022990 a2=0 a3=7fff3202297c items=0 ppid=4279 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.809000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 07:03:02.841390 systemd-networkd[1498]: cali1cd0bb67ee7: Link UP Jan 20 07:03:02.849493 systemd-networkd[1498]: cali1cd0bb67ee7: Gained carrier Jan 20 07:03:02.868000 audit: BPF prog-id=232 op=LOAD Jan 20 07:03:02.869000 audit: BPF prog-id=233 op=LOAD Jan 20 07:03:02.869000 audit[4516]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4504 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613938386635343831326365346434343534626265383037393234 Jan 20 07:03:02.870000 audit: BPF prog-id=233 op=UNLOAD Jan 20 07:03:02.870000 audit[4516]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4504 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613938386635343831326365346434343534626265383037393234 Jan 20 07:03:02.871000 audit: BPF prog-id=234 op=LOAD Jan 20 07:03:02.871000 audit[4516]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4504 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613938386635343831326365346434343534626265383037393234 Jan 20 07:03:02.872000 audit: BPF prog-id=235 op=LOAD Jan 20 07:03:02.872000 audit[4516]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4504 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.872000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613938386635343831326365346434343534626265383037393234 Jan 20 07:03:02.872000 audit: BPF prog-id=235 op=UNLOAD Jan 20 07:03:02.872000 audit[4516]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4504 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613938386635343831326365346434343534626265383037393234 Jan 20 07:03:02.873000 audit: BPF prog-id=234 op=UNLOAD Jan 20 07:03:02.873000 audit[4516]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4504 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613938386635343831326365346434343534626265383037393234 Jan 20 07:03:02.873000 audit: BPF prog-id=236 op=LOAD Jan 20 07:03:02.873000 audit[4516]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4504 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:02.873000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613938386635343831326365346434343534626265383037393234 Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.424 [INFO][4392] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0 calico-apiserver-6c495f47- calico-apiserver 18f52096-e17f-46d5-a51d-1ae5ca49fd14 913 0 2026-01-20 07:02:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c495f47 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-7-121 calico-apiserver-6c495f47-n5kdv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1cd0bb67ee7 [] [] }} ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-n5kdv" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.427 [INFO][4392] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Namespace="calico-apiserver" 
Pod="calico-apiserver-6c495f47-n5kdv" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.614 [INFO][4448] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" HandleID="k8s-pod-network.28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Workload="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.615 [INFO][4448] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" HandleID="k8s-pod-network.28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Workload="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000345250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-7-121", "pod":"calico-apiserver-6c495f47-n5kdv", "timestamp":"2026-01-20 07:03:02.614896686 +0000 UTC"}, Hostname:"172-232-7-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.615 [INFO][4448] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.649 [INFO][4448] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.649 [INFO][4448] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-121' Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.707 [INFO][4448] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" host="172-232-7-121" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.773 [INFO][4448] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-121" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.789 [INFO][4448] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.792 [INFO][4448] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.797 [INFO][4448] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.797 [INFO][4448] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" host="172-232-7-121" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.803 [INFO][4448] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.808 [INFO][4448] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" host="172-232-7-121" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.816 [INFO][4448] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.82.133/26] block=192.168.82.128/26 handle="k8s-pod-network.28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" host="172-232-7-121" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.816 [INFO][4448] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.133/26] handle="k8s-pod-network.28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" host="172-232-7-121" Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.816 [INFO][4448] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 07:03:02.884441 containerd[1604]: 2026-01-20 07:03:02.816 [INFO][4448] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.133/26] IPv6=[] ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" HandleID="k8s-pod-network.28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Workload="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0" Jan 20 07:03:02.885628 containerd[1604]: 2026-01-20 07:03:02.823 [INFO][4392] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-n5kdv" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0", GenerateName:"calico-apiserver-6c495f47-", Namespace:"calico-apiserver", SelfLink:"", UID:"18f52096-e17f-46d5-a51d-1ae5ca49fd14", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c495f47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"", Pod:"calico-apiserver-6c495f47-n5kdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cd0bb67ee7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:02.885628 containerd[1604]: 2026-01-20 07:03:02.826 [INFO][4392] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.133/32] ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-n5kdv" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0" Jan 20 07:03:02.885628 containerd[1604]: 2026-01-20 07:03:02.826 [INFO][4392] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cd0bb67ee7 ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-n5kdv" 
WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0" Jan 20 07:03:02.885628 containerd[1604]: 2026-01-20 07:03:02.851 [INFO][4392] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-n5kdv" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0" Jan 20 07:03:02.885628 containerd[1604]: 2026-01-20 07:03:02.852 [INFO][4392] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-n5kdv" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0", GenerateName:"calico-apiserver-6c495f47-", Namespace:"calico-apiserver", SelfLink:"", UID:"18f52096-e17f-46d5-a51d-1ae5ca49fd14", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c495f47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b", Pod:"calico-apiserver-6c495f47-n5kdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cd0bb67ee7", MAC:"1e:ad:b6:02:97:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:02.885628 containerd[1604]: 2026-01-20 07:03:02.878 [INFO][4392] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-n5kdv" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--n5kdv-eth0" Jan 20 07:03:02.995939 systemd-networkd[1498]: cali7f285efff4b: Link UP Jan 20 07:03:02.996784 systemd-networkd[1498]: cali7f285efff4b: Gained carrier Jan 20 07:03:03.009319 containerd[1604]: time="2026-01-20T07:03:03.009170282Z" level=info msg="connecting to shim 28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b" address="unix:///run/containerd/s/1252d8bdfd0f67eab9a188e759859a2a9301ad29104aaff88e1c8e7ca928cb28" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:03:03.011417 systemd-networkd[1498]: vxlan.calico: Gained IPv6LL Jan 20 07:03:03.018261 containerd[1604]: time="2026-01-20T07:03:03.017542411Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-w6jwn,Uid:dd9207e8-fe1e-43a2-ab22-bb4ac860e560,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ea988f54812ce4d4454bbe807924c2080a2ee98670191f9d84ac348e0aa878d\"" Jan 20 07:03:03.027964 containerd[1604]: time="2026-01-20T07:03:03.027742311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.483 [INFO][4411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0 coredns-674b8bbfcf- kube-system e65373ef-4973-4aa5-9425-ede746ebd364 915 0 2026-01-20 07:02:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-7-121 coredns-674b8bbfcf-8vm6n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7f285efff4b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vm6n" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.483 [INFO][4411] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vm6n" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.618 [INFO][4467] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" HandleID="k8s-pod-network.cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Workload="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.618 [INFO][4467] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" HandleID="k8s-pod-network.cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Workload="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000327940), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-7-121", "pod":"coredns-674b8bbfcf-8vm6n", "timestamp":"2026-01-20 07:03:02.61851732 +0000 UTC"}, Hostname:"172-232-7-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.618 [INFO][4467] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.816 [INFO][4467] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.816 [INFO][4467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-121' Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.834 [INFO][4467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" host="172-232-7-121" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.881 [INFO][4467] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-121" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.891 [INFO][4467] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.900 [INFO][4467] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.907 [INFO][4467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.908 [INFO][4467] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" host="172-232-7-121" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.916 [INFO][4467] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1 Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.927 [INFO][4467] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" host="172-232-7-121" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.959 [INFO][4467] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.134/26] block=192.168.82.128/26 handle="k8s-pod-network.cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" host="172-232-7-121" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.959 [INFO][4467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.134/26] handle="k8s-pod-network.cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" host="172-232-7-121" Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.960 [INFO][4467] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 07:03:03.047485 containerd[1604]: 2026-01-20 07:03:02.960 [INFO][4467] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.134/26] IPv6=[] ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" HandleID="k8s-pod-network.cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Workload="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0" Jan 20 07:03:03.048270 containerd[1604]: 2026-01-20 07:03:02.977 [INFO][4411] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vm6n" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e65373ef-4973-4aa5-9425-ede746ebd364", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"", Pod:"coredns-674b8bbfcf-8vm6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f285efff4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:03.048270 containerd[1604]: 2026-01-20 07:03:02.977 [INFO][4411] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.134/32] ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vm6n" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0" Jan 20 07:03:03.048270 containerd[1604]: 2026-01-20 07:03:02.977 [INFO][4411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f285efff4b ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vm6n" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0" Jan 20 07:03:03.048270 containerd[1604]: 2026-01-20 07:03:03.000 [INFO][4411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vm6n" 
WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0" Jan 20 07:03:03.048270 containerd[1604]: 2026-01-20 07:03:03.002 [INFO][4411] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vm6n" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e65373ef-4973-4aa5-9425-ede746ebd364", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1", Pod:"coredns-674b8bbfcf-8vm6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f285efff4b", MAC:"b6:1a:9a:4c:d5:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:03.048270 containerd[1604]: 2026-01-20 07:03:03.034 [INFO][4411] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-8vm6n" WorkloadEndpoint="172--232--7--121-k8s-coredns--674b8bbfcf--8vm6n-eth0" Jan 20 07:03:03.067423 systemd[1]: Started cri-containerd-28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b.scope - libcontainer container 28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b. 
Jan 20 07:03:03.107465 containerd[1604]: time="2026-01-20T07:03:03.107383097Z" level=info msg="connecting to shim cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1" address="unix:///run/containerd/s/7503d920c6e94ef979f771c0c1ec2e6cb119720d35ccfdd503f08bbf3e352b70" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:03:03.143000 audit[4578]: NETFILTER_CFG table=filter:130 family=2 entries=62 op=nft_register_chain pid=4578 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 07:03:03.143000 audit[4578]: SYSCALL arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7ffdc59d4340 a2=0 a3=7ffdc59d432c items=0 ppid=4279 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.143000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 07:03:03.180089 containerd[1604]: time="2026-01-20T07:03:03.180021185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:03.188855 containerd[1604]: time="2026-01-20T07:03:03.188732794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 07:03:03.189892 containerd[1604]: time="2026-01-20T07:03:03.189301604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:03.190259 kubelet[2867]: E0120 07:03:03.190208 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 07:03:03.191313 kubelet[2867]: E0120 07:03:03.191156 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 07:03:03.194174 kubelet[2867]: E0120 07:03:03.194067 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb7xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6jwn_calico-system(dd9207e8-fe1e-43a2-ab22-bb4ac860e560): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:03.196833 containerd[1604]: time="2026-01-20T07:03:03.196752233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 07:03:03.213698 systemd[1]: Started cri-containerd-cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1.scope - libcontainer container cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1. 
Jan 20 07:03:03.219000 audit: BPF prog-id=237 op=LOAD Jan 20 07:03:03.224000 audit: BPF prog-id=238 op=LOAD Jan 20 07:03:03.224000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4559 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383136633561333930653538633065666634623833666538343636 Jan 20 07:03:03.225000 audit: BPF prog-id=238 op=UNLOAD Jan 20 07:03:03.225000 audit[4572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383136633561333930653538633065666634623833666538343636 Jan 20 07:03:03.225000 audit: BPF prog-id=239 op=LOAD Jan 20 07:03:03.225000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4559 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383136633561333930653538633065666634623833666538343636 Jan 20 07:03:03.225000 audit: BPF prog-id=240 op=LOAD Jan 20 07:03:03.225000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4559 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383136633561333930653538633065666634623833666538343636 Jan 20 07:03:03.226000 audit: BPF prog-id=240 op=UNLOAD Jan 20 07:03:03.226000 audit[4572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383136633561333930653538633065666634623833666538343636 Jan 20 07:03:03.228000 audit: BPF prog-id=239 op=UNLOAD Jan 20 07:03:03.228000 audit[4572]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4559 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383136633561333930653538633065666634623833666538343636 Jan 20 07:03:03.228000 audit: BPF prog-id=241 op=LOAD Jan 20 07:03:03.228000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4559 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238383136633561333930653538633065666634623833666538343636 Jan 20 07:03:03.251251 containerd[1604]: time="2026-01-20T07:03:03.250943070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c495f47-pkkt5,Uid:add8a880-515a-44f3-9fed-8077d26ba5b6,Namespace:calico-apiserver,Attempt:0,}" Jan 20 07:03:03.254000 audit: BPF prog-id=242 op=LOAD Jan 20 07:03:03.255000 audit: BPF prog-id=243 op=LOAD Jan 20 07:03:03.255000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa238 a2=98 a3=0 items=0 ppid=4599 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363333330303636326666653536663539623966303763656361393266 Jan 20 07:03:03.256000 audit: BPF prog-id=243 op=UNLOAD Jan 20 07:03:03.256000 audit[4618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4599 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363333330303636326666653536663539623966303763656361393266 Jan 20 07:03:03.256000 audit: BPF prog-id=244 op=LOAD Jan 20 07:03:03.256000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa488 a2=98 a3=0 items=0 ppid=4599 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.256000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363333330303636326666653536663539623966303763656361393266 Jan 20 07:03:03.257000 audit: BPF prog-id=245 op=LOAD Jan 20 07:03:03.257000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fa218 a2=98 a3=0 items=0 ppid=4599 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363333330303636326666653536663539623966303763656361393266 Jan 20 07:03:03.257000 audit: BPF prog-id=245 op=UNLOAD Jan 20 07:03:03.257000 audit[4618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4599 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363333330303636326666653536663539623966303763656361393266 Jan 20 07:03:03.257000 audit: BPF prog-id=244 op=UNLOAD Jan 20 07:03:03.257000 audit[4618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4599 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363333330303636326666653536663539623966303763656361393266 Jan 20 07:03:03.257000 audit: BPF prog-id=246 op=LOAD Jan 20 07:03:03.257000 audit[4618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa6e8 a2=98 a3=0 items=0 ppid=4599 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363333330303636326666653536663539623966303763656361393266 Jan 20 07:03:03.261660 containerd[1604]: time="2026-01-20T07:03:03.261444901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4c84c57-fbspp,Uid:48809565-7ef3-4c36-a2a9-e27dfb3fe63c,Namespace:calico-system,Attempt:0,}" Jan 20 07:03:03.293000 audit[4649]: NETFILTER_CFG table=filter:131 family=2 entries=54 op=nft_register_chain pid=4649 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 07:03:03.293000 audit[4649]: SYSCALL arch=c000003e syscall=46 success=yes exit=25572 a0=3 a1=7ffd6d6115a0 a2=0 a3=7ffd6d61158c 
items=0 ppid=4279 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.293000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 07:03:03.446574 containerd[1604]: time="2026-01-20T07:03:03.446157530Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:03.447262 containerd[1604]: time="2026-01-20T07:03:03.447122210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 07:03:03.447772 containerd[1604]: time="2026-01-20T07:03:03.447314690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:03.447852 kubelet[2867]: E0120 07:03:03.447512 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 07:03:03.447852 kubelet[2867]: E0120 07:03:03.447578 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 07:03:03.450370 kubelet[2867]: E0120 07:03:03.447720 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb7xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6jwn_calico-system(dd9207e8-fe1e-43a2-ab22-bb4ac860e560): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:03.450370 kubelet[2867]: E0120 07:03:03.449646 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:03:03.451582 containerd[1604]: time="2026-01-20T07:03:03.451541925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8vm6n,Uid:e65373ef-4973-4aa5-9425-ede746ebd364,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1\"" Jan 20 07:03:03.454484 kubelet[2867]: E0120 07:03:03.453282 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:03.462632 containerd[1604]: time="2026-01-20T07:03:03.462575727Z" level=info msg="CreateContainer within sandbox 
\"cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 07:03:03.486039 containerd[1604]: time="2026-01-20T07:03:03.485988191Z" level=info msg="Container 9764a7f1fe52a8bb042592acd6fbf4c201feff986b8d7de67095203e6d147053: CDI devices from CRI Config.CDIDevices: []" Jan 20 07:03:03.505057 containerd[1604]: time="2026-01-20T07:03:03.505018132Z" level=info msg="CreateContainer within sandbox \"cc3300662ffe56f59b9f07ceca92f5f18a4287805f5c03784c28243a031185e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9764a7f1fe52a8bb042592acd6fbf4c201feff986b8d7de67095203e6d147053\"" Jan 20 07:03:03.509141 containerd[1604]: time="2026-01-20T07:03:03.509113276Z" level=info msg="StartContainer for \"9764a7f1fe52a8bb042592acd6fbf4c201feff986b8d7de67095203e6d147053\"" Jan 20 07:03:03.527346 containerd[1604]: time="2026-01-20T07:03:03.527307116Z" level=info msg="connecting to shim 9764a7f1fe52a8bb042592acd6fbf4c201feff986b8d7de67095203e6d147053" address="unix:///run/containerd/s/7503d920c6e94ef979f771c0c1ec2e6cb119720d35ccfdd503f08bbf3e352b70" protocol=ttrpc version=3 Jan 20 07:03:03.545790 containerd[1604]: time="2026-01-20T07:03:03.545752485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c495f47-n5kdv,Uid:18f52096-e17f-46d5-a51d-1ae5ca49fd14,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"28816c5a390e58c0eff4b83fe8466fd2dba633ca4f14fd34f311625a7eea2f7b\"" Jan 20 07:03:03.550875 containerd[1604]: time="2026-01-20T07:03:03.550068990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 07:03:03.604438 systemd[1]: Started cri-containerd-9764a7f1fe52a8bb042592acd6fbf4c201feff986b8d7de67095203e6d147053.scope - libcontainer container 9764a7f1fe52a8bb042592acd6fbf4c201feff986b8d7de67095203e6d147053. 
Jan 20 07:03:03.664000 audit: BPF prog-id=247 op=LOAD Jan 20 07:03:03.665000 audit: BPF prog-id=248 op=LOAD Jan 20 07:03:03.665000 audit[4684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4599 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363461376631666535326138626230343235393261636436666266 Jan 20 07:03:03.665000 audit: BPF prog-id=248 op=UNLOAD Jan 20 07:03:03.665000 audit[4684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4599 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363461376631666535326138626230343235393261636436666266 Jan 20 07:03:03.666000 audit: BPF prog-id=249 op=LOAD Jan 20 07:03:03.666000 audit[4684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4599 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363461376631666535326138626230343235393261636436666266 Jan 20 07:03:03.666000 audit: BPF prog-id=250 op=LOAD Jan 20 07:03:03.666000 audit[4684]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4599 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363461376631666535326138626230343235393261636436666266 Jan 20 07:03:03.666000 audit: BPF prog-id=250 op=UNLOAD Jan 20 07:03:03.666000 audit[4684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4599 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363461376631666535326138626230343235393261636436666266 Jan 20 07:03:03.666000 audit: BPF prog-id=249 op=UNLOAD Jan 20 07:03:03.666000 audit[4684]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4599 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363461376631666535326138626230343235393261636436666266 Jan 20 07:03:03.666000 audit: BPF prog-id=251 op=LOAD Jan 20 07:03:03.666000 audit[4684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4599 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363461376631666535326138626230343235393261636436666266 Jan 20 07:03:03.739650 containerd[1604]: time="2026-01-20T07:03:03.739225332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:03.742150 containerd[1604]: time="2026-01-20T07:03:03.741572555Z" level=info msg="StartContainer for \"9764a7f1fe52a8bb042592acd6fbf4c201feff986b8d7de67095203e6d147053\" returns successfully" Jan 20 07:03:03.745726 containerd[1604]: time="2026-01-20T07:03:03.745643879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:03.748406 containerd[1604]: time="2026-01-20T07:03:03.746031079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 07:03:03.750142 kubelet[2867]: E0120 07:03:03.749397 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:03.751142 kubelet[2867]: E0120 07:03:03.751054 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:03.752208 kubelet[2867]: E0120 07:03:03.751601 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52lh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c495f47-n5kdv_calico-apiserver(18f52096-e17f-46d5-a51d-1ae5ca49fd14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:03.752673 kubelet[2867]: E0120 07:03:03.750846 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:03:03.753897 kubelet[2867]: E0120 07:03:03.753728 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:03:03.760306 kubelet[2867]: E0120 07:03:03.760215 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:03.779521 systemd-networkd[1498]: cali676cef0ee6e: Link UP Jan 20 07:03:03.785632 systemd-networkd[1498]: cali676cef0ee6e: Gained carrier Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.476 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0 calico-kube-controllers-5c4c84c57- calico-system 48809565-7ef3-4c36-a2a9-e27dfb3fe63c 909 0 2026-01-20 07:02:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c4c84c57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-7-121 calico-kube-controllers-5c4c84c57-fbspp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali676cef0ee6e [] [] }} ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Namespace="calico-system" Pod="calico-kube-controllers-5c4c84c57-fbspp" WorkloadEndpoint="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.480 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Namespace="calico-system" Pod="calico-kube-controllers-5c4c84c57-fbspp" WorkloadEndpoint="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.646 [INFO][4678] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" HandleID="k8s-pod-network.2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Workload="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.646 [INFO][4678] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" HandleID="k8s-pod-network.2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Workload="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032b570), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-121", "pod":"calico-kube-controllers-5c4c84c57-fbspp", "timestamp":"2026-01-20 07:03:03.646515353 +0000 UTC"}, Hostname:"172-232-7-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.647 [INFO][4678] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.647 [INFO][4678] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.647 [INFO][4678] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-121' Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.669 [INFO][4678] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" host="172-232-7-121" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.680 [INFO][4678] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-121" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.698 [INFO][4678] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.700 [INFO][4678] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.704 [INFO][4678] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.704 [INFO][4678] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" host="172-232-7-121" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.706 [INFO][4678] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887 Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.720 [INFO][4678] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" host="172-232-7-121" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.755 [INFO][4678] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.135/26] block=192.168.82.128/26 handle="k8s-pod-network.2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" host="172-232-7-121" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.755 [INFO][4678] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.135/26] handle="k8s-pod-network.2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" host="172-232-7-121" Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.755 [INFO][4678] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 07:03:03.832213 containerd[1604]: 2026-01-20 07:03:03.755 [INFO][4678] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.135/26] IPv6=[] ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" HandleID="k8s-pod-network.2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Workload="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0" Jan 20 07:03:03.834292 containerd[1604]: 2026-01-20 07:03:03.763 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Namespace="calico-system" Pod="calico-kube-controllers-5c4c84c57-fbspp" WorkloadEndpoint="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0", GenerateName:"calico-kube-controllers-5c4c84c57-", Namespace:"calico-system", SelfLink:"", UID:"48809565-7ef3-4c36-a2a9-e27dfb3fe63c", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c4c84c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"", Pod:"calico-kube-controllers-5c4c84c57-fbspp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali676cef0ee6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:03.834292 containerd[1604]: 2026-01-20 07:03:03.763 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.135/32] ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Namespace="calico-system" Pod="calico-kube-controllers-5c4c84c57-fbspp" WorkloadEndpoint="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0" Jan 20 07:03:03.834292 containerd[1604]: 2026-01-20 07:03:03.763 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali676cef0ee6e ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Namespace="calico-system" Pod="calico-kube-controllers-5c4c84c57-fbspp" WorkloadEndpoint="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0" Jan 20 07:03:03.834292 containerd[1604]: 2026-01-20 07:03:03.789 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Namespace="calico-system" Pod="calico-kube-controllers-5c4c84c57-fbspp" WorkloadEndpoint="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0" Jan 20 07:03:03.834292 containerd[1604]: 2026-01-20 07:03:03.792 [INFO][4644] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Namespace="calico-system" Pod="calico-kube-controllers-5c4c84c57-fbspp" WorkloadEndpoint="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0", GenerateName:"calico-kube-controllers-5c4c84c57-", Namespace:"calico-system", SelfLink:"", UID:"48809565-7ef3-4c36-a2a9-e27dfb3fe63c", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c4c84c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887", Pod:"calico-kube-controllers-5c4c84c57-fbspp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali676cef0ee6e", MAC:"a2:59:c7:f5:0a:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:03.834292 containerd[1604]: 2026-01-20 07:03:03.829 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" Namespace="calico-system" Pod="calico-kube-controllers-5c4c84c57-fbspp" WorkloadEndpoint="172--232--7--121-k8s-calico--kube--controllers--5c4c84c57--fbspp-eth0" Jan 20 07:03:03.889462 containerd[1604]: time="2026-01-20T07:03:03.889376191Z" level=info msg="connecting to shim 2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887" address="unix:///run/containerd/s/875be2419debbc8e7286a85d5ae4f119da6ef3319c85b5adda62bfe5690baba2" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:03:03.910487 systemd-networkd[1498]: calid094c7b5d14: Link UP Jan 20 07:03:03.914839 systemd-networkd[1498]: calid094c7b5d14: Gained carrier Jan 20 07:03:03.951000 audit[4761]: NETFILTER_CFG table=filter:132 family=2 entries=58 op=nft_register_chain pid=4761 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 07:03:03.951000 audit[4761]: SYSCALL arch=c000003e syscall=46 success=yes exit=27164 a0=3 a1=7ffc613f6370 a2=0 a3=7ffc613f635c items=0 ppid=4279 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:03.951000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 07:03:03.970616 containerd[1604]: 
2026-01-20 07:03:03.482 [INFO][4642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0 calico-apiserver-6c495f47- calico-apiserver add8a880-515a-44f3-9fed-8077d26ba5b6 911 0 2026-01-20 07:02:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c495f47 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-7-121 calico-apiserver-6c495f47-pkkt5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid094c7b5d14 [] [] }} ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-pkkt5" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.484 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-pkkt5" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.656 [INFO][4675] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" HandleID="k8s-pod-network.c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Workload="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.677 [INFO][4675] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" HandleID="k8s-pod-network.c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Workload="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003936f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-7-121", "pod":"calico-apiserver-6c495f47-pkkt5", "timestamp":"2026-01-20 07:03:03.656104623 +0000 UTC"}, Hostname:"172-232-7-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.677 [INFO][4675] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.757 [INFO][4675] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.757 [INFO][4675] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-121' Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.776 [INFO][4675] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" host="172-232-7-121" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.795 [INFO][4675] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-121" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.818 [INFO][4675] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.836 [INFO][4675] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.847 [INFO][4675] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="172-232-7-121" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.849 [INFO][4675] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" host="172-232-7-121" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.855 [INFO][4675] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9 Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.875 [INFO][4675] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" host="172-232-7-121" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.885 [INFO][4675] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.136/26] block=192.168.82.128/26 handle="k8s-pod-network.c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" host="172-232-7-121" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.888 [INFO][4675] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.136/26] handle="k8s-pod-network.c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" host="172-232-7-121" Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.888 [INFO][4675] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
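Both CNI invocations above ([4678] and [4675]) funnel their block writes through the same host-wide IPAM lock; the second acquires it at 07:03:03.757, immediately after the first releases it at 07:03:03.755. The sketch below shows the general serialize-around-a-lock pattern with an advisory file lock; it is not Calico's lock mechanism, and the lock path is made up for the example.

# Generic host-wide lock pattern (illustrative only; hypothetical lock path).
import fcntl
from contextlib import contextmanager

LOCK_PATH = "/tmp/example-ipam.lock"  # made-up path for this sketch

@contextmanager
def host_wide_lock(path=LOCK_PATH):
    with open(path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)      # "About to acquire..." blocks here
        try:
            yield                          # "Acquired host-wide IPAM lock."
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)  # "Released host-wide IPAM lock."

with host_wide_lock():
    # read block, assign an address, write the block back
    pass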
Jan 20 07:03:03.970616 containerd[1604]: 2026-01-20 07:03:03.888 [INFO][4675] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.136/26] IPv6=[] ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" HandleID="k8s-pod-network.c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Workload="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0" Jan 20 07:03:03.972960 containerd[1604]: 2026-01-20 07:03:03.898 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-pkkt5" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0", GenerateName:"calico-apiserver-6c495f47-", Namespace:"calico-apiserver", SelfLink:"", UID:"add8a880-515a-44f3-9fed-8077d26ba5b6", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c495f47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"", Pod:"calico-apiserver-6c495f47-pkkt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid094c7b5d14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:03.972960 containerd[1604]: 2026-01-20 07:03:03.898 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.136/32] ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-pkkt5" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0" Jan 20 07:03:03.972960 containerd[1604]: 2026-01-20 07:03:03.898 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid094c7b5d14 ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-pkkt5" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0" Jan 20 07:03:03.972960 containerd[1604]: 2026-01-20 07:03:03.919 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-pkkt5" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0" Jan 20 07:03:03.972960 containerd[1604]: 2026-01-20 07:03:03.925 [INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-pkkt5" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0", GenerateName:"calico-apiserver-6c495f47-", Namespace:"calico-apiserver", SelfLink:"", UID:"add8a880-515a-44f3-9fed-8077d26ba5b6", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 7, 2, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c495f47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-121", ContainerID:"c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9", Pod:"calico-apiserver-6c495f47-pkkt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid094c7b5d14", MAC:"e2:f7:1a:6a:9a:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 07:03:03.972960 containerd[1604]: 2026-01-20 07:03:03.961 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" Namespace="calico-apiserver" Pod="calico-apiserver-6c495f47-pkkt5" WorkloadEndpoint="172--232--7--121-k8s-calico--apiserver--6c495f47--pkkt5-eth0" Jan 20 07:03:04.006336 systemd[1]: Started cri-containerd-2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887.scope - libcontainer container 2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887. 
Jan 20 07:03:04.032245 containerd[1604]: time="2026-01-20T07:03:04.031948199Z" level=info msg="connecting to shim c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9" address="unix:///run/containerd/s/b2832fd4f87ff47c1361e6ca2ce3e943b040a1ef9d23f43289a217f507c3cfc6" namespace=k8s.io protocol=ttrpc version=3 Jan 20 07:03:04.043000 audit[4778]: NETFILTER_CFG table=filter:133 family=2 entries=53 op=nft_register_chain pid=4778 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 07:03:04.043000 audit[4778]: SYSCALL arch=c000003e syscall=46 success=yes exit=26608 a0=3 a1=7fff04ae9b50 a2=0 a3=7fff04ae9b3c items=0 ppid=4279 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.043000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 07:03:04.085000 audit: BPF prog-id=252 op=LOAD Jan 20 07:03:04.086000 audit: BPF prog-id=253 op=LOAD Jan 20 07:03:04.086000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4745 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383334303735313561616337393638363634616137346164656134 Jan 20 07:03:04.087000 audit: BPF prog-id=253 op=UNLOAD Jan 20 07:03:04.087000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4745 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383334303735313561616337393638363634616137346164656134 Jan 20 07:03:04.087000 audit: BPF prog-id=254 op=LOAD Jan 20 07:03:04.087000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4745 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383334303735313561616337393638363634616137346164656134 Jan 20 07:03:04.088000 audit: BPF prog-id=255 op=LOAD Jan 20 07:03:04.088000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4745 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.088000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383334303735313561616337393638363634616137346164656134 Jan 20 07:03:04.089000 audit: BPF prog-id=255 op=UNLOAD Jan 20 07:03:04.089000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4745 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383334303735313561616337393638363634616137346164656134 Jan 20 07:03:04.089000 audit: BPF prog-id=254 op=UNLOAD Jan 20 07:03:04.089000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4745 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383334303735313561616337393638363634616137346164656134 Jan 20 07:03:04.089000 audit: BPF prog-id=256 op=LOAD Jan 20 07:03:04.089000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4745 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234383334303735313561616337393638363634616137346164656134 Jan 20 07:03:04.102443 systemd[1]: Started cri-containerd-c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9.scope - libcontainer container c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9. 
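The audit PROCTITLE records above carry the audited command line as a hex blob with NUL-separated argv entries. Decoded, the iptables record logged at 07:03:03.951 reads iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000, and the runc records read runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/… (the decoded blob is exactly 128 bytes, so the tail of the container ID is cut off). A short decoder:

# Decode an audit proctitle= hex blob into the original argv.
def decode_proctitle(hex_blob):
    # Arguments are separated by NUL bytes inside the hex-encoded string.
    return [arg.decode("utf-8", "replace")
            for arg in bytes.fromhex(hex_blob).split(b"\x00")]

# The iptables-nft-restore record from 07:03:03.951 above:
blob = ("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
        "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74"
        "657276616C003530303030")
print(decode_proctitle(blob))
# ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10',
#  '--wait-interval', '50000']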
Jan 20 07:03:04.124000 audit: BPF prog-id=257 op=LOAD Jan 20 07:03:04.125000 audit: BPF prog-id=258 op=LOAD Jan 20 07:03:04.125000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4791 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331626665326263643062386264326230643330316635333964666261 Jan 20 07:03:04.125000 audit: BPF prog-id=258 op=UNLOAD Jan 20 07:03:04.125000 audit[4804]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4791 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331626665326263643062386264326230643330316635333964666261 Jan 20 07:03:04.125000 audit: BPF prog-id=259 op=LOAD Jan 20 07:03:04.125000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4791 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331626665326263643062386264326230643330316635333964666261 Jan 20 07:03:04.125000 audit: BPF prog-id=260 op=LOAD Jan 20 07:03:04.125000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4791 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331626665326263643062386264326230643330316635333964666261 Jan 20 07:03:04.126000 audit: BPF prog-id=260 op=UNLOAD Jan 20 07:03:04.126000 audit[4804]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4791 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331626665326263643062386264326230643330316635333964666261 Jan 20 07:03:04.126000 audit: BPF prog-id=259 op=UNLOAD Jan 20 07:03:04.126000 audit[4804]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4791 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331626665326263643062386264326230643330316635333964666261 Jan 20 07:03:04.126000 audit: BPF prog-id=261 op=LOAD Jan 20 07:03:04.126000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4791 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331626665326263643062386264326230643330316635333964666261 Jan 20 07:03:04.163935 containerd[1604]: time="2026-01-20T07:03:04.163857045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4c84c57-fbspp,Uid:48809565-7ef3-4c36-a2a9-e27dfb3fe63c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2483407515aac7968664aa74adea488f9344f537b937c9d5a4f12b308ad83887\"" Jan 20 07:03:04.178487 containerd[1604]: time="2026-01-20T07:03:04.178415424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 07:03:04.195815 containerd[1604]: time="2026-01-20T07:03:04.195418798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c495f47-pkkt5,Uid:add8a880-515a-44f3-9fed-8077d26ba5b6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c1bfe2bcd0b8bd2b0d301f539dfba06def705069561b6fbaface97eb265dacd9\"" Jan 20 07:03:04.226396 systemd-networkd[1498]: cali1cd0bb67ee7: Gained IPv6LL Jan 20 07:03:04.502034 containerd[1604]: time="2026-01-20T07:03:04.501631302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:04.504446 containerd[1604]: time="2026-01-20T07:03:04.504386218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 07:03:04.504550 containerd[1604]: time="2026-01-20T07:03:04.504521382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:04.505496 kubelet[2867]: E0120 07:03:04.505044 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 07:03:04.505496 kubelet[2867]: E0120 07:03:04.505244 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 07:03:04.506199 kubelet[2867]: E0120 07:03:04.505649 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kn6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c4c84c57-fbspp_calico-system(48809565-7ef3-4c36-a2a9-e27dfb3fe63c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:04.506546 containerd[1604]: time="2026-01-20T07:03:04.506488231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 07:03:04.507505 kubelet[2867]: E0120 07:03:04.507073 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:03:04.667482 containerd[1604]: time="2026-01-20T07:03:04.667411992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:04.668961 containerd[1604]: time="2026-01-20T07:03:04.668917954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 07:03:04.669026 containerd[1604]: time="2026-01-20T07:03:04.669008158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:04.669291 kubelet[2867]: E0120 07:03:04.669230 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:04.669359 kubelet[2867]: E0120 07:03:04.669304 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:04.670602 kubelet[2867]: E0120 07:03:04.669680 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhxz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c495f47-pkkt5_calico-apiserver(add8a880-515a-44f3-9fed-8077d26ba5b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:04.670903 kubelet[2867]: E0120 07:03:04.670861 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:03:04.674886 systemd-networkd[1498]: calib5a6679f433: Gained IPv6LL Jan 20 07:03:04.764843 kubelet[2867]: E0120 07:03:04.763473 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:04.766658 kubelet[2867]: E0120 07:03:04.766624 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:03:04.771615 kubelet[2867]: E0120 07:03:04.771581 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:03:04.772060 kubelet[2867]: E0120 07:03:04.772035 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:03:04.772516 kubelet[2867]: E0120 07:03:04.772429 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:03:04.804400 kubelet[2867]: I0120 07:03:04.804290 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8vm6n" podStartSLOduration=52.804231899 podStartE2EDuration="52.804231899s" podCreationTimestamp="2026-01-20 07:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 07:03:04.80053825 +0000 UTC m=+57.831211835" watchObservedRunningTime="2026-01-20 07:03:04.804231899 +0000 UTC m=+57.834905444" Jan 20 07:03:04.851000 audit[4838]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:04.851000 audit[4838]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc5ff9d00 a2=0 a3=7ffcc5ff9cec items=0 ppid=3020 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:04.857000 audit[4838]: NETFILTER_CFG table=nat:135 family=2 entries=44 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:04.857000 audit[4838]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcc5ff9d00 a2=0 a3=7ffcc5ff9cec items=0 ppid=3020 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:04.901000 audit[4840]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:04.901000 audit[4840]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd08d84350 a2=0 a3=7ffd08d8433c items=0 ppid=3020 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:04.909000 audit[4840]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:04.909000 audit[4840]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd08d84350 a2=0 a3=7ffd08d8433c items=0 ppid=3020 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:04.909000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:04.932424 systemd-networkd[1498]: cali7f285efff4b: Gained IPv6LL Jan 20 07:03:05.058425 systemd-networkd[1498]: calid094c7b5d14: Gained IPv6LL Jan 20 07:03:05.187291 systemd-networkd[1498]: cali676cef0ee6e: Gained IPv6LL Jan 20 07:03:05.773212 kubelet[2867]: E0120 07:03:05.773131 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:03:05.773900 kubelet[2867]: E0120 07:03:05.773236 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:03:05.775338 kubelet[2867]: E0120 07:03:05.774711 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:05.775584 kubelet[2867]: E0120 07:03:05.775544 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:03:05.936000 audit[4842]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4842 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:05.938228 kernel: kauditd_printk_skb: 403 callbacks suppressed Jan 20 07:03:05.938357 kernel: audit: type=1325 
audit(1768892585.936:745): table=filter:138 family=2 entries=14 op=nft_register_rule pid=4842 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:05.936000 audit[4842]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff8da72550 a2=0 a3=7fff8da7253c items=0 ppid=3020 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:05.955036 kernel: audit: type=1300 audit(1768892585.936:745): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff8da72550 a2=0 a3=7fff8da7253c items=0 ppid=3020 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:05.955135 kernel: audit: type=1327 audit(1768892585.936:745): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:05.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:05.977000 audit[4842]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=4842 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:05.977000 audit[4842]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff8da72550 a2=0 a3=7fff8da7253c items=0 ppid=3020 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:05.984310 kernel: audit: type=1325 audit(1768892585.977:746): table=nat:139 family=2 entries=56 op=nft_register_chain pid=4842 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:03:05.984360 kernel: audit: type=1300 audit(1768892585.977:746): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff8da72550 a2=0 a3=7fff8da7253c items=0 ppid=3020 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:03:06.005730 kernel: audit: type=1327 audit(1768892585.977:746): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:05.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:03:06.774316 kubelet[2867]: E0120 07:03:06.774176 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:11.265220 containerd[1604]: time="2026-01-20T07:03:11.258221060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 07:03:11.410291 containerd[1604]: time="2026-01-20T07:03:11.409494079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:11.416497 containerd[1604]: time="2026-01-20T07:03:11.416403919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 07:03:11.417015 containerd[1604]: time="2026-01-20T07:03:11.416667876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:11.417456 kubelet[2867]: E0120 07:03:11.417373 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 07:03:11.427784 kubelet[2867]: E0120 07:03:11.417856 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 07:03:11.427784 kubelet[2867]: E0120 07:03:11.427569 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed0c70bbfbc248a6a5f52e94a287b2e5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rbc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547dfbb977-shb9n_calico-system(32ea1474-9105-468a-bde8-3bef92925725): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:11.438138 containerd[1604]: time="2026-01-20T07:03:11.436936401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 07:03:11.603915 containerd[1604]: time="2026-01-20T07:03:11.603592024Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:11.609956 containerd[1604]: time="2026-01-20T07:03:11.605941922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 
07:03:11.609956 containerd[1604]: time="2026-01-20T07:03:11.606057546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:11.610237 kubelet[2867]: E0120 07:03:11.606384 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 07:03:11.610237 kubelet[2867]: E0120 07:03:11.606491 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 07:03:11.610237 kubelet[2867]: E0120 07:03:11.608140 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rbc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547dfbb977-shb9n_calico-system(32ea1474-9105-468a-bde8-3bef92925725): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:11.611976 kubelet[2867]: E0120 07:03:11.611672 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:03:16.247774 containerd[1604]: time="2026-01-20T07:03:16.247705636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 07:03:16.376407 containerd[1604]: time="2026-01-20T07:03:16.376330431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:16.377612 containerd[1604]: time="2026-01-20T07:03:16.377546132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 07:03:16.377804 containerd[1604]: time="2026-01-20T07:03:16.377637404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:16.377842 kubelet[2867]: E0120 07:03:16.377791 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 07:03:16.378288 kubelet[2867]: E0120 07:03:16.377870 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 07:03:16.378288 kubelet[2867]: E0120 07:03:16.378200 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bljpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8krch_calico-system(930ba9b4-4a35-4f62-858d-858957a6d7e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:16.379749 kubelet[2867]: E0120 07:03:16.379659 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:03:17.250357 containerd[1604]: time="2026-01-20T07:03:17.249994510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 
07:03:17.378490 containerd[1604]: time="2026-01-20T07:03:17.378418837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:17.379671 containerd[1604]: time="2026-01-20T07:03:17.379625446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 07:03:17.380141 containerd[1604]: time="2026-01-20T07:03:17.379725818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:17.380225 kubelet[2867]: E0120 07:03:17.379950 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 07:03:17.380225 kubelet[2867]: E0120 07:03:17.380014 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 07:03:17.380610 kubelet[2867]: E0120 07:03:17.380259 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb7xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6jwn_calico-system(dd9207e8-fe1e-43a2-ab22-bb4ac860e560): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:17.383355 containerd[1604]: time="2026-01-20T07:03:17.383283546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 07:03:17.515259 containerd[1604]: time="2026-01-20T07:03:17.515094245Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:17.516314 containerd[1604]: time="2026-01-20T07:03:17.516278025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 07:03:17.517290 containerd[1604]: time="2026-01-20T07:03:17.516358897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:17.517433 kubelet[2867]: E0120 07:03:17.516582 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 07:03:17.517433 kubelet[2867]: E0120 07:03:17.516635 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 07:03:17.517433 kubelet[2867]: E0120 07:03:17.516787 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb7xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6jwn_calico-system(dd9207e8-fe1e-43a2-ab22-bb4ac860e560): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:17.518393 kubelet[2867]: E0120 07:03:17.518329 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:03:18.249775 containerd[1604]: time="2026-01-20T07:03:18.248266496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 07:03:18.380209 containerd[1604]: time="2026-01-20T07:03:18.380106333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:18.382909 containerd[1604]: time="2026-01-20T07:03:18.382843169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 07:03:18.382991 containerd[1604]: time="2026-01-20T07:03:18.382861970Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:18.383294 kubelet[2867]: E0120 07:03:18.383228 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:18.383294 kubelet[2867]: E0120 07:03:18.383287 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:18.384108 kubelet[2867]: E0120 07:03:18.383519 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52lh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c495f47-n5kdv_calico-apiserver(18f52096-e17f-46d5-a51d-1ae5ca49fd14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:18.385532 kubelet[2867]: E0120 07:03:18.385439 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:03:19.250672 containerd[1604]: time="2026-01-20T07:03:19.250336886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 07:03:19.392967 containerd[1604]: time="2026-01-20T07:03:19.392879971Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:19.396199 containerd[1604]: time="2026-01-20T07:03:19.396131537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 07:03:19.396496 containerd[1604]: time="2026-01-20T07:03:19.396163048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:19.396648 kubelet[2867]: E0120 07:03:19.396609 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 07:03:19.397833 kubelet[2867]: E0120 07:03:19.396684 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 07:03:19.397833 kubelet[2867]: E0120 07:03:19.397620 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kn6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c4c84c57-fbspp_calico-system(48809565-7ef3-4c36-a2a9-e27dfb3fe63c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:19.399534 kubelet[2867]: E0120 07:03:19.399500 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:03:21.247868 containerd[1604]: time="2026-01-20T07:03:21.247512448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 07:03:21.376706 containerd[1604]: time="2026-01-20T07:03:21.376619428Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:21.377866 containerd[1604]: time="2026-01-20T07:03:21.377836335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 07:03:21.377964 containerd[1604]: time="2026-01-20T07:03:21.377915717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:21.378218 kubelet[2867]: E0120 07:03:21.378128 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:21.378218 kubelet[2867]: E0120 07:03:21.378214 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:21.379162 
kubelet[2867]: E0120 07:03:21.378585 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhxz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c495f47-pkkt5_calico-apiserver(add8a880-515a-44f3-9fed-8077d26ba5b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:21.380045 kubelet[2867]: E0120 07:03:21.380016 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:03:22.246028 kubelet[2867]: E0120 07:03:22.245919 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:26.247882 kubelet[2867]: E0120 07:03:26.247448 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:03:27.882217 kubelet[2867]: E0120 07:03:27.881580 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:29.250868 kubelet[2867]: E0120 07:03:29.250810 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:03:29.257138 kubelet[2867]: E0120 07:03:29.257085 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:03:31.249206 kubelet[2867]: E0120 07:03:31.249126 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:31.253800 kubelet[2867]: E0120 07:03:31.253736 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:03:32.249120 kubelet[2867]: E0120 07:03:32.248997 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:03:34.248383 kubelet[2867]: E0120 07:03:34.247969 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:03:35.247408 kubelet[2867]: E0120 07:03:35.247211 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:39.248247 kubelet[2867]: E0120 07:03:39.247092 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:03:40.252450 containerd[1604]: time="2026-01-20T07:03:40.252315601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 07:03:40.390415 containerd[1604]: time="2026-01-20T07:03:40.390360275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:40.391857 containerd[1604]: time="2026-01-20T07:03:40.391736625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 07:03:40.391857 containerd[1604]: time="2026-01-20T07:03:40.391825356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:40.392080 kubelet[2867]: E0120 07:03:40.392011 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 07:03:40.392577 kubelet[2867]: E0120 07:03:40.392115 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 07:03:40.392969 kubelet[2867]: E0120 07:03:40.392884 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb7xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6jwn_calico-system(dd9207e8-fe1e-43a2-ab22-bb4ac860e560): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:40.395285 containerd[1604]: time="2026-01-20T07:03:40.395253175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 07:03:40.520336 containerd[1604]: time="2026-01-20T07:03:40.519530295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:40.521480 containerd[1604]: time="2026-01-20T07:03:40.521381911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 07:03:40.521480 containerd[1604]: time="2026-01-20T07:03:40.521504022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:40.535339 kubelet[2867]: E0120 07:03:40.535164 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 07:03:40.535485 kubelet[2867]: E0120 07:03:40.535398 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 07:03:40.538409 kubelet[2867]: E0120 07:03:40.538259 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb7xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6jwn_calico-system(dd9207e8-fe1e-43a2-ab22-bb4ac860e560): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:40.539665 kubelet[2867]: E0120 07:03:40.539595 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:03:41.251224 containerd[1604]: time="2026-01-20T07:03:41.250965028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 07:03:41.403636 containerd[1604]: time="2026-01-20T07:03:41.403584171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
07:03:41.405254 containerd[1604]: time="2026-01-20T07:03:41.405209984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 07:03:41.405436 containerd[1604]: time="2026-01-20T07:03:41.405311285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:41.405802 kubelet[2867]: E0120 07:03:41.405711 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 07:03:41.405802 kubelet[2867]: E0120 07:03:41.405784 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 07:03:41.407315 kubelet[2867]: E0120 07:03:41.406535 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed0c70bbfbc248a6a5f52e94a287b2e5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rbc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547dfbb977-shb9n_calico-system(32ea1474-9105-468a-bde8-3bef92925725): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:41.407474 containerd[1604]: time="2026-01-20T07:03:41.406622263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 07:03:41.534739 containerd[1604]: time="2026-01-20T07:03:41.534687377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:41.536414 containerd[1604]: time="2026-01-20T07:03:41.536334140Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 07:03:41.536414 containerd[1604]: time="2026-01-20T07:03:41.536383790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:41.536811 kubelet[2867]: E0120 07:03:41.536765 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:41.536892 kubelet[2867]: E0120 07:03:41.536822 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:41.537440 kubelet[2867]: E0120 07:03:41.537367 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52lh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c495f47-n5kdv_calico-apiserver(18f52096-e17f-46d5-a51d-1ae5ca49fd14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:41.538225 containerd[1604]: time="2026-01-20T07:03:41.538176255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 07:03:41.538590 kubelet[2867]: E0120 07:03:41.538522 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:03:41.670598 containerd[1604]: time="2026-01-20T07:03:41.670384976Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:41.672059 containerd[1604]: time="2026-01-20T07:03:41.671718805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 07:03:41.672059 containerd[1604]: time="2026-01-20T07:03:41.671854327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:41.673063 kubelet[2867]: E0120 07:03:41.672574 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 07:03:41.673063 kubelet[2867]: E0120 07:03:41.672719 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 07:03:41.673063 kubelet[2867]: E0120 07:03:41.672969 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rbc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547dfbb977-shb9n_calico-system(32ea1474-9105-468a-bde8-3bef92925725): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:41.674620 kubelet[2867]: E0120 07:03:41.674561 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:03:45.255770 containerd[1604]: time="2026-01-20T07:03:45.254930667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 07:03:45.406609 containerd[1604]: time="2026-01-20T07:03:45.406487624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:45.408115 containerd[1604]: time="2026-01-20T07:03:45.408065804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:45.408470 containerd[1604]: time="2026-01-20T07:03:45.408346708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 07:03:45.409213 kubelet[2867]: E0120 07:03:45.409084 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:45.411879 kubelet[2867]: E0120 07:03:45.410364 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:03:45.411879 kubelet[2867]: E0120 07:03:45.410672 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhxz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c495f47-pkkt5_calico-apiserver(add8a880-515a-44f3-9fed-8077d26ba5b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:45.411879 kubelet[2867]: E0120 07:03:45.411811 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:03:45.413455 containerd[1604]: time="2026-01-20T07:03:45.412447349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 07:03:45.549828 containerd[1604]: time="2026-01-20T07:03:45.549388061Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:45.550535 containerd[1604]: time="2026-01-20T07:03:45.550493656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 07:03:45.550716 containerd[1604]: time="2026-01-20T07:03:45.550685488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:45.553503 kubelet[2867]: E0120 07:03:45.552913 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 07:03:45.553503 kubelet[2867]: E0120 07:03:45.553460 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 07:03:45.555387 kubelet[2867]: E0120 07:03:45.555288 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kn6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c4c84c57-fbspp_calico-system(48809565-7ef3-4c36-a2a9-e27dfb3fe63c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:45.557513 kubelet[2867]: E0120 07:03:45.557454 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:03:46.251605 containerd[1604]: time="2026-01-20T07:03:46.251550888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 07:03:46.385977 containerd[1604]: time="2026-01-20T07:03:46.385903823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:03:46.388077 containerd[1604]: time="2026-01-20T07:03:46.388020809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 07:03:46.388163 containerd[1604]: time="2026-01-20T07:03:46.388027039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 07:03:46.388387 kubelet[2867]: E0120 07:03:46.388332 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 07:03:46.388470 kubelet[2867]: E0120 07:03:46.388400 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 07:03:46.388638 kubelet[2867]: E0120 
07:03:46.388554 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bljpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8krch_calico-system(930ba9b4-4a35-4f62-858d-858957a6d7e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 07:03:46.390093 kubelet[2867]: E0120 07:03:46.390059 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:03:52.250582 kubelet[2867]: E0120 07:03:52.250504 2867 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:03:53.253268 kubelet[2867]: E0120 07:03:53.253090 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:03:56.251999 kubelet[2867]: E0120 07:03:56.251859 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:03:57.249738 kubelet[2867]: E0120 07:03:57.249239 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:04:01.258415 kubelet[2867]: E0120 07:04:01.258287 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:04:02.249579 kubelet[2867]: E0120 07:04:02.249097 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:04:04.260747 kubelet[2867]: E0120 07:04:04.260099 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:04:07.256126 kubelet[2867]: E0120 07:04:07.255928 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:04:10.255490 kubelet[2867]: E0120 07:04:10.255036 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:04:11.250305 kubelet[2867]: E0120 07:04:11.250230 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:04:13.256778 kubelet[2867]: E0120 07:04:13.255621 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:04:15.255501 kubelet[2867]: E0120 07:04:15.255234 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:04:15.261556 kubelet[2867]: E0120 07:04:15.261481 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:04:19.253265 kubelet[2867]: E0120 07:04:19.252144 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:04:19.258964 kubelet[2867]: E0120 07:04:19.258898 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:04:21.250194 kubelet[2867]: E0120 
07:04:21.250105 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:04:25.251404 containerd[1604]: time="2026-01-20T07:04:25.250761821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 07:04:25.652611 containerd[1604]: time="2026-01-20T07:04:25.652544308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:04:25.654125 containerd[1604]: time="2026-01-20T07:04:25.654045027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 07:04:25.654305 containerd[1604]: time="2026-01-20T07:04:25.654277210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 07:04:25.655113 kubelet[2867]: E0120 07:04:25.654479 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:04:25.655113 kubelet[2867]: E0120 07:04:25.654618 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:04:25.655113 kubelet[2867]: E0120 07:04:25.654990 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52lh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c495f47-n5kdv_calico-apiserver(18f52096-e17f-46d5-a51d-1ae5ca49fd14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 07:04:25.656535 kubelet[2867]: E0120 07:04:25.656238 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:04:26.250759 containerd[1604]: time="2026-01-20T07:04:26.250632315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 07:04:26.387264 containerd[1604]: time="2026-01-20T07:04:26.387130884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:04:26.391207 containerd[1604]: time="2026-01-20T07:04:26.390047063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 07:04:26.391535 containerd[1604]: time="2026-01-20T07:04:26.390281934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 07:04:26.391952 kubelet[2867]: E0120 07:04:26.391883 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:04:26.392232 kubelet[2867]: E0120 07:04:26.391967 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 07:04:26.393788 kubelet[2867]: E0120 07:04:26.392275 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhxz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c495f47-pkkt5_calico-apiserver(add8a880-515a-44f3-9fed-8077d26ba5b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 07:04:26.395049 kubelet[2867]: E0120 07:04:26.395002 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:04:27.260479 containerd[1604]: time="2026-01-20T07:04:27.260139688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 07:04:27.397478 containerd[1604]: time="2026-01-20T07:04:27.397411054Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:04:27.401216 containerd[1604]: time="2026-01-20T07:04:27.400432613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 07:04:27.402208 containerd[1604]: time="2026-01-20T07:04:27.400635324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 07:04:27.402489 kubelet[2867]: E0120 
07:04:27.402329 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 07:04:27.403709 kubelet[2867]: E0120 07:04:27.402565 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 07:04:27.403709 kubelet[2867]: E0120 07:04:27.403174 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bljpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8krch_calico-system(930ba9b4-4a35-4f62-858d-858957a6d7e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 07:04:27.404778 kubelet[2867]: E0120 07:04:27.404710 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:04:28.248532 containerd[1604]: time="2026-01-20T07:04:28.248267983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 07:04:28.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.232.7.121:22-20.161.92.111:51196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:28.308064 systemd[1]: Started sshd@9-172.232.7.121:22-20.161.92.111:51196.service - OpenSSH per-connection server daemon (20.161.92.111:51196). Jan 20 07:04:28.318279 kernel: audit: type=1130 audit(1768892668.308:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.232.7.121:22-20.161.92.111:51196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:28.408224 containerd[1604]: time="2026-01-20T07:04:28.407362609Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:04:28.410211 containerd[1604]: time="2026-01-20T07:04:28.409661643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 07:04:28.411331 kubelet[2867]: E0120 07:04:28.410629 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 07:04:28.414214 containerd[1604]: time="2026-01-20T07:04:28.410022276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 07:04:28.414295 kubelet[2867]: E0120 07:04:28.413265 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 07:04:28.414295 kubelet[2867]: E0120 07:04:28.413724 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kn6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c4c84c57-fbspp_calico-system(48809565-7ef3-4c36-a2a9-e27dfb3fe63c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 07:04:28.415561 containerd[1604]: time="2026-01-20T07:04:28.414608006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 07:04:28.416205 kubelet[2867]: E0120 07:04:28.415539 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:04:28.550395 containerd[1604]: time="2026-01-20T07:04:28.550233171Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:04:28.551832 containerd[1604]: time="2026-01-20T07:04:28.551691830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 07:04:28.551832 containerd[1604]: time="2026-01-20T07:04:28.551799210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 07:04:28.552545 kubelet[2867]: E0120 07:04:28.552422 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 07:04:28.552545 kubelet[2867]: E0120 07:04:28.552525 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 07:04:28.551000 audit[4977]: USER_ACCT pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.553439 kubelet[2867]: E0120 07:04:28.553378 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed0c70bbfbc248a6a5f52e94a287b2e5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rbc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547dfbb977-shb9n_calico-system(32ea1474-9105-468a-bde8-3bef92925725): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 07:04:28.561214 
kernel: audit: type=1101 audit(1768892668.551:748): pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.561626 sshd[4977]: Accepted publickey for core from 20.161.92.111 port 51196 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:28.563448 containerd[1604]: time="2026-01-20T07:04:28.563404535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 07:04:28.563000 audit[4977]: CRED_ACQ pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.577205 kernel: audit: type=1103 audit(1768892668.563:749): pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.577968 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:28.587234 kernel: audit: type=1006 audit(1768892668.564:750): pid=4977 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 07:04:28.564000 audit[4977]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3fa4a350 a2=3 a3=0 items=0 ppid=1 pid=4977 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:28.599163 kernel: audit: type=1300 audit(1768892668.564:750): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3fa4a350 a2=3 a3=0 items=0 ppid=1 pid=4977 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:28.599398 kernel: audit: type=1327 audit(1768892668.564:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:28.564000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:28.615854 systemd-logind[1577]: New session 11 of user core. Jan 20 07:04:28.627650 systemd[1]: Started session-11.scope - Session 11 of User core. 
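The audit records around this SSH login (type=1300/1327 above) hex-encode the process title in the proctitle= field. A minimal Python sketch for decoding that field when reading a capture like this by hand; the helper name and the assumption that the value is hex-encoded bytes with NUL-separated argv entries are illustrative, not taken from the log:

# Minimal sketch: decode the hex proctitle= value seen in the audit
# records above. Assumes plain hex-encoded bytes whose argv entries,
# if any, are NUL-separated (an assumption, not stated in the log).
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    # Join NUL-separated argv parts with spaces for display.
    return " ".join(p.decode("ascii", errors="replace")
                    for p in raw.split(b"\x00") if p)

print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# prints: sshd-session: core [priv]

The decoded value matches the sshd-session process that opens and later closes session 11 for user core in the surrounding entries.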
Jan 20 07:04:28.636000 audit[4977]: USER_START pid=4977 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.648238 kernel: audit: type=1105 audit(1768892668.636:751): pid=4977 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.643000 audit[4981]: CRED_ACQ pid=4981 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.665216 kernel: audit: type=1103 audit(1768892668.643:752): pid=4981 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.731352 containerd[1604]: time="2026-01-20T07:04:28.730549674Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:04:28.735372 containerd[1604]: time="2026-01-20T07:04:28.735292494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 07:04:28.735527 containerd[1604]: time="2026-01-20T07:04:28.735424685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 07:04:28.736979 kubelet[2867]: E0120 07:04:28.735658 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 07:04:28.736979 kubelet[2867]: E0120 07:04:28.735743 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 07:04:28.737283 kubelet[2867]: E0120 07:04:28.736146 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rbc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547dfbb977-shb9n_calico-system(32ea1474-9105-468a-bde8-3bef92925725): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 07:04:28.739166 kubelet[2867]: E0120 07:04:28.738627 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:04:28.897705 sshd[4981]: Connection closed by 20.161.92.111 port 51196 Jan 20 07:04:28.899690 sshd-session[4977]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:28.902000 audit[4977]: USER_END pid=4977 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.915235 kernel: audit: type=1106 audit(1768892668.902:753): pid=4977 uid=0 auid=500 
ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.914000 audit[4977]: CRED_DISP pid=4977 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.924366 systemd[1]: sshd@9-172.232.7.121:22-20.161.92.111:51196.service: Deactivated successfully. Jan 20 07:04:28.927260 kernel: audit: type=1104 audit(1768892668.914:754): pid=4977 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:28.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.232.7.121:22-20.161.92.111:51196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:28.933721 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 07:04:28.938268 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Jan 20 07:04:28.945096 systemd-logind[1577]: Removed session 11. Jan 20 07:04:30.248447 containerd[1604]: time="2026-01-20T07:04:30.248337525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 07:04:30.400750 containerd[1604]: time="2026-01-20T07:04:30.400512887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:04:30.402604 containerd[1604]: time="2026-01-20T07:04:30.402451919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 07:04:30.402785 containerd[1604]: time="2026-01-20T07:04:30.402660030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 07:04:30.403910 kubelet[2867]: E0120 07:04:30.403695 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 07:04:30.405436 kubelet[2867]: E0120 07:04:30.403871 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 07:04:30.406347 kubelet[2867]: E0120 07:04:30.405887 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb7xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6jwn_calico-system(dd9207e8-fe1e-43a2-ab22-bb4ac860e560): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 07:04:30.410592 containerd[1604]: time="2026-01-20T07:04:30.410270919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 07:04:30.542467 containerd[1604]: time="2026-01-20T07:04:30.542399943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 07:04:30.544058 containerd[1604]: time="2026-01-20T07:04:30.544029434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 07:04:30.544350 containerd[1604]: time="2026-01-20T07:04:30.544103264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 07:04:30.544717 kubelet[2867]: E0120 07:04:30.544664 2867 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 07:04:30.544937 kubelet[2867]: E0120 07:04:30.544864 2867 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 07:04:30.545296 kubelet[2867]: E0120 07:04:30.545223 2867 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb7xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6jwn_calico-system(dd9207e8-fe1e-43a2-ab22-bb4ac860e560): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 07:04:30.546785 kubelet[2867]: E0120 07:04:30.546735 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:04:31.246345 kubelet[2867]: E0120 07:04:31.246271 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:04:33.942750 systemd[1]: Started 
sshd@10-172.232.7.121:22-20.161.92.111:47000.service - OpenSSH per-connection server daemon (20.161.92.111:47000). Jan 20 07:04:33.945374 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 07:04:33.945509 kernel: audit: type=1130 audit(1768892673.941:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.232.7.121:22-20.161.92.111:47000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:33.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.232.7.121:22-20.161.92.111:47000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:34.144000 audit[5004]: USER_ACCT pid=5004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.155257 kernel: audit: type=1101 audit(1768892674.144:757): pid=5004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.155636 sshd[5004]: Accepted publickey for core from 20.161.92.111 port 47000 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:34.173535 kernel: audit: type=1103 audit(1768892674.157:758): pid=5004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.174224 kernel: audit: type=1006 audit(1768892674.157:759): pid=5004 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 07:04:34.157000 audit[5004]: CRED_ACQ pid=5004 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.178647 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:34.157000 audit[5004]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd36354960 a2=3 a3=0 items=0 ppid=1 pid=5004 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:34.193403 kernel: audit: type=1300 audit(1768892674.157:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd36354960 a2=3 a3=0 items=0 ppid=1 pid=5004 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:34.157000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:34.201257 kernel: audit: type=1327 audit(1768892674.157:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:34.206459 systemd-logind[1577]: New session 12 of user core. 
Jan 20 07:04:34.215431 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 07:04:34.222000 audit[5004]: USER_START pid=5004 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.242256 kernel: audit: type=1105 audit(1768892674.222:760): pid=5004 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.246218 kubelet[2867]: E0120 07:04:34.245541 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:04:34.245000 audit[5008]: CRED_ACQ pid=5008 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.258280 kernel: audit: type=1103 audit(1768892674.245:761): pid=5008 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.436530 sshd[5008]: Connection closed by 20.161.92.111 port 47000 Jan 20 07:04:34.437390 sshd-session[5004]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:34.439000 audit[5004]: USER_END pid=5004 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.452225 kernel: audit: type=1106 audit(1768892674.439:762): pid=5004 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.450000 audit[5004]: CRED_DISP pid=5004 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.457564 systemd[1]: sshd@10-172.232.7.121:22-20.161.92.111:47000.service: Deactivated successfully. Jan 20 07:04:34.462654 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 07:04:34.463214 kernel: audit: type=1104 audit(1768892674.450:763): pid=5004 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:34.466522 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. 
Jan 20 07:04:34.469036 systemd-logind[1577]: Removed session 12. Jan 20 07:04:34.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.232.7.121:22-20.161.92.111:47000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:38.249554 kubelet[2867]: E0120 07:04:38.249357 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:04:38.254068 kubelet[2867]: E0120 07:04:38.254036 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:04:39.252848 kubelet[2867]: E0120 07:04:39.252531 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:04:39.487478 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 07:04:39.487643 kernel: audit: type=1130 audit(1768892679.485:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.232.7.121:22-20.161.92.111:47004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:39.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.232.7.121:22-20.161.92.111:47004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:39.486664 systemd[1]: Started sshd@11-172.232.7.121:22-20.161.92.111:47004.service - OpenSSH per-connection server daemon (20.161.92.111:47004). 
Jan 20 07:04:39.721000 audit[5034]: USER_ACCT pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:39.732131 kernel: audit: type=1101 audit(1768892679.721:766): pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:39.732449 sshd[5034]: Accepted publickey for core from 20.161.92.111 port 47004 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:39.731000 audit[5034]: CRED_ACQ pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:39.736653 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:39.744383 kernel: audit: type=1103 audit(1768892679.731:767): pid=5034 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:39.744527 kernel: audit: type=1006 audit(1768892679.732:768): pid=5034 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 20 07:04:39.732000 audit[5034]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3a88f040 a2=3 a3=0 items=0 ppid=1 pid=5034 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:39.758315 kernel: audit: type=1300 audit(1768892679.732:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3a88f040 a2=3 a3=0 items=0 ppid=1 pid=5034 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:39.764079 systemd-logind[1577]: New session 13 of user core. Jan 20 07:04:39.772762 kernel: audit: type=1327 audit(1768892679.732:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:39.732000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:39.770443 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 07:04:39.777000 audit[5034]: USER_START pid=5034 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:39.790288 kernel: audit: type=1105 audit(1768892679.777:769): pid=5034 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:39.791000 audit[5038]: CRED_ACQ pid=5038 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:39.802236 kernel: audit: type=1103 audit(1768892679.791:770): pid=5038 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:39.989303 sshd[5038]: Connection closed by 20.161.92.111 port 47004 Jan 20 07:04:39.989937 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:39.992000 audit[5034]: USER_END pid=5034 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:39.996000 audit[5034]: CRED_DISP pid=5034 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.007650 kernel: audit: type=1106 audit(1768892679.992:771): pid=5034 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.007731 kernel: audit: type=1104 audit(1768892679.996:772): pid=5034 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.008709 systemd[1]: sshd@11-172.232.7.121:22-20.161.92.111:47004.service: Deactivated successfully. Jan 20 07:04:40.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.232.7.121:22-20.161.92.111:47004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:40.013596 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Jan 20 07:04:40.017412 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 07:04:40.030130 systemd-logind[1577]: Removed session 13. 
Jan 20 07:04:40.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.232.7.121:22-20.161.92.111:47018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:40.034669 systemd[1]: Started sshd@12-172.232.7.121:22-20.161.92.111:47018.service - OpenSSH per-connection server daemon (20.161.92.111:47018). Jan 20 07:04:40.190000 audit[5051]: USER_ACCT pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.191941 sshd[5051]: Accepted publickey for core from 20.161.92.111 port 47018 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:40.193000 audit[5051]: CRED_ACQ pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.193000 audit[5051]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1073d460 a2=3 a3=0 items=0 ppid=1 pid=5051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:40.193000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:40.196836 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:40.204892 systemd-logind[1577]: New session 14 of user core. Jan 20 07:04:40.215499 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 07:04:40.220000 audit[5051]: USER_START pid=5051 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.224000 audit[5055]: CRED_ACQ pid=5055 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.252154 kubelet[2867]: E0120 07:04:40.251792 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:04:40.485635 sshd[5055]: Connection closed by 20.161.92.111 port 47018 Jan 20 07:04:40.487487 sshd-session[5051]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:40.490000 audit[5051]: USER_END pid=5051 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.490000 audit[5051]: CRED_DISP pid=5051 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.496828 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Jan 20 07:04:40.497918 systemd[1]: sshd@12-172.232.7.121:22-20.161.92.111:47018.service: Deactivated successfully. Jan 20 07:04:40.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.232.7.121:22-20.161.92.111:47018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:40.503899 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 07:04:40.537277 systemd-logind[1577]: Removed session 14. Jan 20 07:04:40.540825 systemd[1]: Started sshd@13-172.232.7.121:22-20.161.92.111:47024.service - OpenSSH per-connection server daemon (20.161.92.111:47024). Jan 20 07:04:40.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.232.7.121:22-20.161.92.111:47024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:04:40.726000 audit[5065]: USER_ACCT pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.727713 sshd[5065]: Accepted publickey for core from 20.161.92.111 port 47024 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:40.728000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.728000 audit[5065]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6cadf8c0 a2=3 a3=0 items=0 ppid=1 pid=5065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:40.728000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:40.732497 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:40.742274 systemd-logind[1577]: New session 15 of user core. Jan 20 07:04:40.755393 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 07:04:40.763000 audit[5065]: USER_START pid=5065 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.766000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.964380 sshd[5076]: Connection closed by 20.161.92.111 port 47024 Jan 20 07:04:40.966465 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:40.969000 audit[5065]: USER_END pid=5065 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.969000 audit[5065]: CRED_DISP pid=5065 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:40.974456 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. Jan 20 07:04:40.975879 systemd[1]: sshd@13-172.232.7.121:22-20.161.92.111:47024.service: Deactivated successfully. Jan 20 07:04:40.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.232.7.121:22-20.161.92.111:47024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:40.980977 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 20 07:04:40.987421 systemd-logind[1577]: Removed session 15. Jan 20 07:04:41.245859 kubelet[2867]: E0120 07:04:41.245779 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:04:42.249114 kubelet[2867]: E0120 07:04:42.248959 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:04:43.246331 kubelet[2867]: E0120 07:04:43.245903 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:04:44.248098 kubelet[2867]: E0120 07:04:44.247925 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:04:46.009981 systemd[1]: Started sshd@14-172.232.7.121:22-20.161.92.111:45930.service - OpenSSH per-connection server daemon (20.161.92.111:45930). Jan 20 07:04:46.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.232.7.121:22-20.161.92.111:45930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:46.014230 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 07:04:46.014393 kernel: audit: type=1130 audit(1768892686.009:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.232.7.121:22-20.161.92.111:45930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:04:46.213000 audit[5094]: USER_ACCT pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.215156 sshd[5094]: Accepted publickey for core from 20.161.92.111 port 45930 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:46.220990 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:46.224265 kernel: audit: type=1101 audit(1768892686.213:793): pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.216000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.240261 kernel: audit: type=1103 audit(1768892686.216:794): pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.246942 systemd-logind[1577]: New session 16 of user core. Jan 20 07:04:46.257407 kernel: audit: type=1006 audit(1768892686.216:795): pid=5094 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 07:04:46.258596 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 07:04:46.216000 audit[5094]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebf533bd0 a2=3 a3=0 items=0 ppid=1 pid=5094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:46.274259 kernel: audit: type=1300 audit(1768892686.216:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebf533bd0 a2=3 a3=0 items=0 ppid=1 pid=5094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:46.216000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:46.277000 audit[5094]: USER_START pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.285129 kernel: audit: type=1327 audit(1768892686.216:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:46.285417 kernel: audit: type=1105 audit(1768892686.277:796): pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.283000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.304281 kernel: audit: type=1103 audit(1768892686.283:797): pid=5098 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.470201 sshd[5098]: Connection closed by 20.161.92.111 port 45930 Jan 20 07:04:46.471111 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:46.473000 audit[5094]: USER_END pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.480981 systemd[1]: sshd@14-172.232.7.121:22-20.161.92.111:45930.service: Deactivated successfully. Jan 20 07:04:46.484691 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 20 07:04:46.486224 kernel: audit: type=1106 audit(1768892686.473:798): pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.473000 audit[5094]: CRED_DISP pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.487675 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Jan 20 07:04:46.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.232.7.121:22-20.161.92.111:45930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:46.497223 kernel: audit: type=1104 audit(1768892686.473:799): pid=5094 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.508416 systemd-logind[1577]: Removed session 16. Jan 20 07:04:46.511402 systemd[1]: Started sshd@15-172.232.7.121:22-20.161.92.111:45940.service - OpenSSH per-connection server daemon (20.161.92.111:45940). Jan 20 07:04:46.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.232.7.121:22-20.161.92.111:45940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:46.680819 sshd[5110]: Accepted publickey for core from 20.161.92.111 port 45940 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:46.679000 audit[5110]: USER_ACCT pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.683000 audit[5110]: CRED_ACQ pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.683000 audit[5110]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0a10e0d0 a2=3 a3=0 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:46.683000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:46.688109 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:46.699763 systemd-logind[1577]: New session 17 of user core. Jan 20 07:04:46.713545 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 20 07:04:46.719000 audit[5110]: USER_START pid=5110 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:46.722000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:47.091897 sshd[5114]: Connection closed by 20.161.92.111 port 45940 Jan 20 07:04:47.097488 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:47.104000 audit[5110]: USER_END pid=5110 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:47.104000 audit[5110]: CRED_DISP pid=5110 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:47.113259 systemd[1]: sshd@15-172.232.7.121:22-20.161.92.111:45940.service: Deactivated successfully. Jan 20 07:04:47.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.232.7.121:22-20.161.92.111:45940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:47.121440 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 07:04:47.154829 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Jan 20 07:04:47.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.232.7.121:22-20.161.92.111:45948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:47.156768 systemd[1]: Started sshd@16-172.232.7.121:22-20.161.92.111:45948.service - OpenSSH per-connection server daemon (20.161.92.111:45948). Jan 20 07:04:47.167417 systemd-logind[1577]: Removed session 17. 
Jan 20 07:04:47.383000 audit[5124]: USER_ACCT pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:47.384674 sshd[5124]: Accepted publickey for core from 20.161.92.111 port 45948 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:47.385000 audit[5124]: CRED_ACQ pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:47.385000 audit[5124]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe95fe0d30 a2=3 a3=0 items=0 ppid=1 pid=5124 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:47.385000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:47.387912 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:47.396687 systemd-logind[1577]: New session 18 of user core. Jan 20 07:04:47.406360 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 07:04:47.409000 audit[5124]: USER_START pid=5124 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:47.413000 audit[5128]: CRED_ACQ pid=5128 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:48.224560 sshd[5128]: Connection closed by 20.161.92.111 port 45948 Jan 20 07:04:48.226482 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:48.229000 audit[5124]: USER_END pid=5124 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:48.230000 audit[5124]: CRED_DISP pid=5124 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:48.238666 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Jan 20 07:04:48.239648 systemd[1]: sshd@16-172.232.7.121:22-20.161.92.111:45948.service: Deactivated successfully. Jan 20 07:04:48.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.232.7.121:22-20.161.92.111:45948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:04:48.246104 kubelet[2867]: E0120 07:04:48.245716 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:04:48.246993 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 07:04:48.268711 systemd-logind[1577]: Removed session 18. Jan 20 07:04:48.272584 systemd[1]: Started sshd@17-172.232.7.121:22-20.161.92.111:45962.service - OpenSSH per-connection server daemon (20.161.92.111:45962). Jan 20 07:04:48.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.232.7.121:22-20.161.92.111:45962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:48.342000 audit[5144]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:04:48.342000 audit[5144]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeffcb6780 a2=0 a3=7ffeffcb676c items=0 ppid=3020 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:48.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:04:48.350000 audit[5144]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:04:48.350000 audit[5144]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeffcb6780 a2=0 a3=7ffeffcb676c items=0 ppid=3020 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:48.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:04:48.389000 audit[5147]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:04:48.389000 audit[5147]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd94ddd040 a2=0 a3=7ffd94ddd02c items=0 ppid=3020 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:48.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:04:48.394000 audit[5147]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5147 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:04:48.394000 audit[5147]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd94ddd040 a2=0 a3=0 items=0 ppid=3020 pid=5147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:48.394000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:04:48.491000 audit[5142]: USER_ACCT pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:48.492747 sshd[5142]: Accepted publickey for core from 20.161.92.111 port 45962 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:48.493000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:48.493000 audit[5142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedf370280 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:48.493000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:48.496584 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:48.507086 systemd-logind[1577]: New session 19 of user core. Jan 20 07:04:48.510733 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 07:04:48.517000 audit[5142]: USER_START pid=5142 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:48.522000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:48.909910 sshd[5149]: Connection closed by 20.161.92.111 port 45962 Jan 20 07:04:48.912107 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:48.914000 audit[5142]: USER_END pid=5142 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:48.914000 audit[5142]: CRED_DISP pid=5142 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:48.927216 systemd[1]: sshd@17-172.232.7.121:22-20.161.92.111:45962.service: Deactivated successfully. Jan 20 07:04:48.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.232.7.121:22-20.161.92.111:45962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:48.935729 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 20 07:04:48.965454 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Jan 20 07:04:48.967861 systemd[1]: Started sshd@18-172.232.7.121:22-20.161.92.111:45964.service - OpenSSH per-connection server daemon (20.161.92.111:45964). Jan 20 07:04:48.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.232.7.121:22-20.161.92.111:45964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:48.973163 systemd-logind[1577]: Removed session 19. Jan 20 07:04:49.156000 audit[5159]: USER_ACCT pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:49.157965 sshd[5159]: Accepted publickey for core from 20.161.92.111 port 45964 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:49.158000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:49.158000 audit[5159]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff2fb8720 a2=3 a3=0 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:49.158000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:49.162542 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:49.173385 systemd-logind[1577]: New session 20 of user core. Jan 20 07:04:49.180778 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 20 07:04:49.186000 audit[5159]: USER_START pid=5159 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:49.189000 audit[5163]: CRED_ACQ pid=5163 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:49.250099 kubelet[2867]: E0120 07:04:49.249746 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8" Jan 20 07:04:49.361687 sshd[5163]: Connection closed by 20.161.92.111 port 45964 Jan 20 07:04:49.362387 sshd-session[5159]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:49.366000 audit[5159]: USER_END pid=5159 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:49.367000 audit[5159]: CRED_DISP pid=5159 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:49.372047 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. Jan 20 07:04:49.372950 systemd[1]: sshd@18-172.232.7.121:22-20.161.92.111:45964.service: Deactivated successfully. Jan 20 07:04:49.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.232.7.121:22-20.161.92.111:45964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:49.376974 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 07:04:49.382333 systemd-logind[1577]: Removed session 20. 
Jan 20 07:04:50.248750 kubelet[2867]: E0120 07:04:50.248681 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c4c84c57-fbspp" podUID="48809565-7ef3-4c36-a2a9-e27dfb3fe63c" Jan 20 07:04:51.247779 kubelet[2867]: E0120 07:04:51.246501 2867 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jan 20 07:04:52.256211 kubelet[2867]: E0120 07:04:52.255507 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-n5kdv" podUID="18f52096-e17f-46d5-a51d-1ae5ca49fd14" Jan 20 07:04:53.251761 kubelet[2867]: E0120 07:04:53.250964 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547dfbb977-shb9n" podUID="32ea1474-9105-468a-bde8-3bef92925725" Jan 20 07:04:54.197000 audit[5175]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:04:54.201655 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 20 07:04:54.201752 kernel: audit: type=1325 audit(1768892694.197:841): table=filter:144 family=2 entries=26 op=nft_register_rule pid=5175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:04:54.197000 audit[5175]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc180b8090 a2=0 a3=7ffc180b807c items=0 ppid=3020 pid=5175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:54.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:04:54.218887 kernel: audit: type=1300 audit(1768892694.197:841): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc180b8090 a2=0 a3=7ffc180b807c items=0 ppid=3020 pid=5175 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:54.218960 kernel: audit: type=1327 audit(1768892694.197:841): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:04:54.224000 audit[5175]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:04:54.224000 audit[5175]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc180b8090 a2=0 a3=7ffc180b807c items=0 ppid=3020 pid=5175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:54.234580 kernel: audit: type=1325 audit(1768892694.224:842): table=nat:145 family=2 entries=104 op=nft_register_chain pid=5175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 07:04:54.234633 kernel: audit: type=1300 audit(1768892694.224:842): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc180b8090 a2=0 a3=7ffc180b807c items=0 ppid=3020 pid=5175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:54.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:04:54.252223 kernel: audit: type=1327 audit(1768892694.224:842): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 07:04:54.401719 systemd[1]: Started sshd@19-172.232.7.121:22-20.161.92.111:40692.service - OpenSSH per-connection server daemon (20.161.92.111:40692). Jan 20 07:04:54.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.232.7.121:22-20.161.92.111:40692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:54.410205 kernel: audit: type=1130 audit(1768892694.401:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.232.7.121:22-20.161.92.111:40692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:04:54.569898 sshd[5177]: Accepted publickey for core from 20.161.92.111 port 40692 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:04:54.568000 audit[5177]: USER_ACCT pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:54.574224 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:04:54.570000 audit[5177]: CRED_ACQ pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:54.581021 kernel: audit: type=1101 audit(1768892694.568:844): pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:54.581123 kernel: audit: type=1103 audit(1768892694.570:845): pid=5177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:54.571000 audit[5177]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc758ac360 a2=3 a3=0 items=0 ppid=1 pid=5177 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:04:54.571000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:04:54.593201 kernel: audit: type=1006 audit(1768892694.571:846): pid=5177 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 20 07:04:54.593169 systemd-logind[1577]: New session 21 of user core. Jan 20 07:04:54.598839 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 07:04:54.605000 audit[5177]: USER_START pid=5177 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:54.609000 audit[5181]: CRED_ACQ pid=5181 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:54.804222 sshd[5181]: Connection closed by 20.161.92.111 port 40692 Jan 20 07:04:54.805147 sshd-session[5177]: pam_unix(sshd:session): session closed for user core Jan 20 07:04:54.809000 audit[5177]: USER_END pid=5177 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:54.809000 audit[5177]: CRED_DISP pid=5177 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:04:54.815901 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Jan 20 07:04:54.818359 systemd[1]: sshd@19-172.232.7.121:22-20.161.92.111:40692.service: Deactivated successfully. Jan 20 07:04:54.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.232.7.121:22-20.161.92.111:40692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:54.827013 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 07:04:54.840845 systemd-logind[1577]: Removed session 21. 
Jan 20 07:04:55.255874 kubelet[2867]: E0120 07:04:55.255111 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6jwn" podUID="dd9207e8-fe1e-43a2-ab22-bb4ac860e560" Jan 20 07:04:55.255874 kubelet[2867]: E0120 07:04:55.255564 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c495f47-pkkt5" podUID="add8a880-515a-44f3-9fed-8077d26ba5b6" Jan 20 07:04:59.849786 systemd[1]: Started sshd@20-172.232.7.121:22-20.161.92.111:40704.service - OpenSSH per-connection server daemon (20.161.92.111:40704). Jan 20 07:04:59.854775 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 07:04:59.854922 kernel: audit: type=1130 audit(1768892699.849:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.232.7.121:22-20.161.92.111:40704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:04:59.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.232.7.121:22-20.161.92.111:40704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 07:05:00.051000 audit[5217]: USER_ACCT pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.061212 kernel: audit: type=1101 audit(1768892700.051:853): pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.061300 sshd[5217]: Accepted publickey for core from 20.161.92.111 port 40704 ssh2: RSA SHA256:roD1FXLyUFqG8Ndiz5vCMZxla/PvLtBbiqoAZrvTa1Y Jan 20 07:05:00.066886 sshd-session[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 07:05:00.063000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.081296 kernel: audit: type=1103 audit(1768892700.063:854): pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.095567 kernel: audit: type=1006 audit(1768892700.063:855): pid=5217 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 07:05:00.063000 audit[5217]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd6d94540 a2=3 a3=0 items=0 ppid=1 pid=5217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:05:00.107843 kernel: audit: type=1300 audit(1768892700.063:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd6d94540 a2=3 a3=0 items=0 ppid=1 pid=5217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 07:05:00.100112 systemd-logind[1577]: New session 22 of user core. Jan 20 07:05:00.113280 kernel: audit: type=1327 audit(1768892700.063:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:05:00.063000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 07:05:00.110548 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 07:05:00.131875 kernel: audit: type=1105 audit(1768892700.118:856): pid=5217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.118000 audit[5217]: USER_START pid=5217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.135000 audit[5221]: CRED_ACQ pid=5221 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.144480 kernel: audit: type=1103 audit(1768892700.135:857): pid=5221 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.394790 sshd[5221]: Connection closed by 20.161.92.111 port 40704 Jan 20 07:05:00.396719 sshd-session[5217]: pam_unix(sshd:session): session closed for user core Jan 20 07:05:00.412862 kernel: audit: type=1106 audit(1768892700.399:858): pid=5217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.399000 audit[5217]: USER_END pid=5217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.409000 audit[5217]: CRED_DISP pid=5217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.424261 kernel: audit: type=1104 audit(1768892700.409:859): pid=5217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 07:05:00.417025 systemd[1]: sshd@20-172.232.7.121:22-20.161.92.111:40704.service: Deactivated successfully. Jan 20 07:05:00.417281 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. Jan 20 07:05:00.424420 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 07:05:00.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.232.7.121:22-20.161.92.111:40704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 07:05:00.431207 systemd-logind[1577]: Removed session 22. 
Jan 20 07:05:02.252220 kubelet[2867]: E0120 07:05:02.249351 2867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8krch" podUID="930ba9b4-4a35-4f62-858d-858957a6d7e8"