Jan 24 00:28:31.139578 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 21:38:55 -00 2026 Jan 24 00:28:31.139605 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:28:31.139615 kernel: BIOS-provided physical RAM map: Jan 24 00:28:31.139622 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Jan 24 00:28:31.139628 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Jan 24 00:28:31.139634 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 24 00:28:31.139644 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Jan 24 00:28:31.139650 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Jan 24 00:28:31.139657 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 24 00:28:31.139663 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 24 00:28:31.139670 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 00:28:31.139676 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 24 00:28:31.139683 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 24 00:28:31.139690 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 24 00:28:31.139700 kernel: NX (Execute Disable) protection: active Jan 24 00:28:31.139707 kernel: APIC: Static calls initialized Jan 24 00:28:31.139714 kernel: SMBIOS 2.8 present. 
Jan 24 00:28:31.139721 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Jan 24 00:28:31.139728 kernel: DMI: Memory slots populated: 1/1 Jan 24 00:28:31.139737 kernel: Hypervisor detected: KVM Jan 24 00:28:31.139744 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jan 24 00:28:31.139751 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:28:31.139758 kernel: kvm-clock: using sched offset of 6489730823 cycles Jan 24 00:28:31.139766 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:28:31.139773 kernel: tsc: Detected 2000.000 MHz processor Jan 24 00:28:31.139781 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:28:31.139789 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:28:31.139796 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Jan 24 00:28:31.139806 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 24 00:28:31.139814 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:28:31.139822 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jan 24 00:28:31.139829 kernel: Using GB pages for direct mapping Jan 24 00:28:31.139836 kernel: ACPI: Early table checksum verification disabled Jan 24 00:28:31.139844 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Jan 24 00:28:31.139852 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:28:31.139862 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:28:31.139870 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:28:31.139878 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 24 00:28:31.139885 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:28:31.139893 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:28:31.139904 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:28:31.139914 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:28:31.139922 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Jan 24 00:28:31.139930 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Jan 24 00:28:31.139938 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 24 00:28:31.139946 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Jan 24 00:28:31.139956 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Jan 24 00:28:31.139964 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Jan 24 00:28:31.139972 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Jan 24 00:28:31.139980 kernel: No NUMA configuration found Jan 24 00:28:31.139988 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jan 24 00:28:31.139996 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Jan 24 00:28:31.140003 kernel: Zone ranges: Jan 24 00:28:31.140012 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:28:31.140022 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 24 00:28:31.140030 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jan 24 00:28:31.140037 kernel: Device empty Jan 24 00:28:31.140045 kernel: Movable zone start for each node Jan 24 
00:28:31.140053 kernel: Early memory node ranges Jan 24 00:28:31.140061 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 24 00:28:31.140069 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Jan 24 00:28:31.140079 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 24 00:28:31.140087 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 24 00:28:31.140095 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:28:31.140102 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 24 00:28:31.140110 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 24 00:28:31.140118 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:28:31.140126 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:28:31.140134 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:28:31.140144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:28:31.140152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:28:31.140160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:28:31.140168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:28:31.140176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:28:31.140184 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:28:31.140192 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:28:31.140202 kernel: TSC deadline timer available Jan 24 00:28:31.140210 kernel: CPU topo: Max. logical packages: 1 Jan 24 00:28:31.140218 kernel: CPU topo: Max. logical dies: 1 Jan 24 00:28:31.140225 kernel: CPU topo: Max. dies per package: 1 Jan 24 00:28:31.140233 kernel: CPU topo: Max. threads per core: 1 Jan 24 00:28:31.140241 kernel: CPU topo: Num. cores per package: 2 Jan 24 00:28:31.140249 kernel: CPU topo: Num. 
threads per package: 2 Jan 24 00:28:31.140257 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 24 00:28:31.140267 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:28:31.140275 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 00:28:31.140283 kernel: kvm-guest: setup PV sched yield Jan 24 00:28:31.140291 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 24 00:28:31.140299 kernel: Booting paravirtualized kernel on KVM Jan 24 00:28:31.140307 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:28:31.140315 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:28:31.140325 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 24 00:28:31.140333 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 24 00:28:31.140340 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:28:31.140348 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:28:31.140356 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:28:31.140365 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:28:31.140389 kernel: random: crng init done Jan 24 00:28:31.140397 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:28:31.140405 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:28:31.140413 kernel: Fallback order for Node 0: 0 Jan 24 00:28:31.140421 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Jan 24 00:28:31.140429 kernel: Policy zone: Normal Jan 24 00:28:31.140437 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:28:31.140445 kernel: software IO TLB: area num 2. Jan 24 00:28:31.140455 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:28:31.140463 kernel: ftrace: allocating 40128 entries in 157 pages Jan 24 00:28:31.140471 kernel: ftrace: allocated 157 pages with 5 groups Jan 24 00:28:31.140479 kernel: Dynamic Preempt: voluntary Jan 24 00:28:31.140487 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:28:31.140496 kernel: rcu: RCU event tracing is enabled. Jan 24 00:28:31.140504 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:28:31.140514 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:28:31.140522 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:28:31.140529 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:28:31.140537 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:28:31.140545 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:28:31.140556 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:28:31.140571 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:28:31.140580 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 24 00:28:31.140588 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:28:31.140596 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:28:31.140607 kernel: Console: colour VGA+ 80x25 Jan 24 00:28:31.140616 kernel: printk: legacy console [tty0] enabled Jan 24 00:28:31.140624 kernel: printk: legacy console [ttyS0] enabled Jan 24 00:28:31.140632 kernel: ACPI: Core revision 20240827 Jan 24 00:28:31.140643 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:28:31.140651 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:28:31.140660 kernel: x2apic enabled Jan 24 00:28:31.140668 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:28:31.140676 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 00:28:31.140689 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 00:28:31.140701 kernel: kvm-guest: setup PV IPIs Jan 24 00:28:31.140726 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:28:31.140735 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Jan 24 00:28:31.140743 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) Jan 24 00:28:31.140751 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:28:31.140760 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:28:31.140768 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:28:31.140777 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:28:31.140788 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:28:31.140796 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:28:31.140805 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 24 00:28:31.140813 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 24 00:28:31.140821 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 24 00:28:31.140830 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 24 00:28:31.140841 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 24 00:28:31.140849 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:28:31.140858 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 24 00:28:31.140866 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:28:31.140874 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:28:31.140883 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:28:31.140891 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:28:31.140902 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:28:31.140910 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 24 00:28:31.140919 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:28:31.140927 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Jan 24 00:28:31.140935 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. 
Jan 24 00:28:31.140944 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:28:31.140952 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:28:31.140962 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 24 00:28:31.140970 kernel: landlock: Up and running. Jan 24 00:28:31.140978 kernel: SELinux: Initializing. Jan 24 00:28:31.140986 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:28:31.140994 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:28:31.141002 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 24 00:28:31.141010 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 24 00:28:31.141020 kernel: ... version: 0 Jan 24 00:28:31.141028 kernel: ... bit width: 48 Jan 24 00:28:31.141036 kernel: ... generic registers: 6 Jan 24 00:28:31.141045 kernel: ... value mask: 0000ffffffffffff Jan 24 00:28:31.141053 kernel: ... max period: 00007fffffffffff Jan 24 00:28:31.141060 kernel: ... fixed-purpose events: 0 Jan 24 00:28:31.141069 kernel: ... event mask: 000000000000003f Jan 24 00:28:31.141077 kernel: signal: max sigframe size: 3376 Jan 24 00:28:31.141087 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:28:31.141096 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:28:31.141104 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 24 00:28:31.141112 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:28:31.141120 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:28:31.141128 kernel: .... node #0, CPUs: #1 Jan 24 00:28:31.141136 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:28:31.141147 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Jan 24 00:28:31.141155 kernel: Memory: 3977432K/4193772K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 210912K reserved, 0K cma-reserved) Jan 24 00:28:31.141163 kernel: devtmpfs: initialized Jan 24 00:28:31.141171 kernel: x86/mm: Memory block size: 128MB Jan 24 00:28:31.141179 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:28:31.141188 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:28:31.141196 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:28:31.141206 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:28:31.141214 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:28:31.141222 kernel: audit: type=2000 audit(1769214507.860:1): state=initialized audit_enabled=0 res=1 Jan 24 00:28:31.141230 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:28:31.141238 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:28:31.141246 kernel: cpuidle: using governor menu Jan 24 00:28:31.141254 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:28:31.141265 kernel: dca service started, version 1.12.1 Jan 24 00:28:31.141273 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 24 00:28:31.141281 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 00:28:31.141290 kernel: PCI: Using configuration type 1 for base access Jan 24 00:28:31.141303 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 00:28:31.141316 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:28:31.141328 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:28:31.141339 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:28:31.141348 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:28:31.141356 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:28:31.141364 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:28:31.141386 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:28:31.141395 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:28:31.141403 kernel: ACPI: Interpreter enabled Jan 24 00:28:31.141413 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:28:31.141422 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:28:31.141430 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:28:31.141438 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:28:31.141446 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:28:31.141454 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:28:31.141695 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:28:31.141891 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:28:31.142075 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:28:31.142086 kernel: PCI host bridge to bus 0000:00 Jan 24 00:28:31.142264 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:28:31.143477 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:28:31.143657 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:28:31.143821 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 24 00:28:31.143983 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 00:28:31.144153 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Jan 24 00:28:31.144317 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:28:31.144544 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 24 00:28:31.144738 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 24 00:28:31.144915 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 24 00:28:31.145089 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 24 00:28:31.145268 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 24 00:28:31.145458 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:28:31.145682 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jan 24 00:28:31.145926 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Jan 24 00:28:31.146149 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 24 00:28:31.146330 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 24 00:28:31.147455 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 24 00:28:31.147828 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jan 24 00:28:31.148110 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 24 00:28:31.149184 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 24 00:28:31.149501 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 24 00:28:31.149742 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 24 00:28:31.149942 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:28:31.150358 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 24 00:28:31.150586 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Jan 24 00:28:31.151467 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Jan 24 00:28:31.151829 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 24 00:28:31.152058 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 24 00:28:31.152103 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:28:31.152118 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:28:31.152127 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:28:31.152135 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:28:31.152143 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:28:31.152151 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:28:31.152160 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:28:31.152168 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:28:31.152179 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 00:28:31.152187 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:28:31.152195 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:28:31.152204 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:28:31.152212 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:28:31.152220 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 00:28:31.152228 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:28:31.152238 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:28:31.152247 kernel: iommu: Default domain type: Translated Jan 24 00:28:31.152270 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:28:31.152293 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:28:31.152301 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:28:31.152310 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Jan 24 00:28:31.152318 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Jan 24 00:28:31.152562 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:28:31.152747 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:28:31.152920 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:28:31.152931 kernel: vgaarb: loaded Jan 24 00:28:31.152940 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 00:28:31.152948 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:28:31.152957 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:28:31.152968 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:28:31.152976 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:28:31.152985 kernel: pnp: PnP ACPI init Jan 24 00:28:31.153170 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 00:28:31.153204 kernel: pnp: PnP ACPI: found 5 devices Jan 24 00:28:31.153214 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:28:31.153223 kernel: NET: Registered PF_INET protocol family Jan 24 00:28:31.153234 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:28:31.153243 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:28:31.153251 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:28:31.153260 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:28:31.153268 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:28:31.153276 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:28:31.153284 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:28:31.153295 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:28:31.153304 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:28:31.153312 kernel: NET: Registered PF_XDP protocol family Jan 24 00:28:31.153510 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:28:31.153675 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:28:31.153837 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:28:31.153998 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 24 00:28:31.154164 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 24 00:28:31.154325 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Jan 24 00:28:31.154336 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:28:31.154344 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:28:31.154353 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Jan 24 00:28:31.154362 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Jan 24 00:28:31.154392 kernel: Initialise system trusted keyrings Jan 24 00:28:31.154401 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:28:31.154409 kernel: Key type asymmetric registered Jan 24 00:28:31.154417 kernel: Asymmetric key parser 'x509' registered Jan 24 00:28:31.154426 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 24 00:28:31.154435 kernel: io scheduler mq-deadline registered Jan 24 00:28:31.154443 kernel: io scheduler kyber registered Jan 24 00:28:31.154452 kernel: io scheduler bfq registered Jan 24 00:28:31.154463 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:28:31.154472 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:28:31.154481 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:28:31.154489 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:28:31.154498 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:28:31.154507 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:28:31.154515 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:28:31.154525 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:28:31.154534 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:28:31.154721 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 00:28:31.154899 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 00:28:31.155068 kernel: rtc_cmos 00:03: setting system clock to 
2026-01-24T00:28:29 UTC (1769214509) Jan 24 00:28:31.155236 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:28:31.155251 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:28:31.155260 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:28:31.155268 kernel: Segment Routing with IPv6 Jan 24 00:28:31.155277 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:28:31.155286 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:28:31.155294 kernel: Key type dns_resolver registered Jan 24 00:28:31.155303 kernel: IPI shorthand broadcast: enabled Jan 24 00:28:31.155314 kernel: sched_clock: Marking stable (1868002768, 367926901)->(2342413048, -106483379) Jan 24 00:28:31.155322 kernel: registered taskstats version 1 Jan 24 00:28:31.155331 kernel: Loading compiled-in X.509 certificates Jan 24 00:28:31.155339 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 08600fac738f210e3b32f727339edfe2b1af2e3d' Jan 24 00:28:31.155348 kernel: Demotion targets for Node 0: null Jan 24 00:28:31.155356 kernel: Key type .fscrypt registered Jan 24 00:28:31.155365 kernel: Key type fscrypt-provisioning registered Jan 24 00:28:31.155970 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:28:31.155982 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:28:31.155991 kernel: ima: No architecture policies found Jan 24 00:28:31.155999 kernel: clk: Disabling unused clocks Jan 24 00:28:31.156008 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 24 00:28:31.156016 kernel: Write protecting the kernel read-only data: 47104k Jan 24 00:28:31.156024 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 24 00:28:31.156037 kernel: Run /init as init process Jan 24 00:28:31.156045 kernel: with arguments: Jan 24 00:28:31.156053 kernel: /init Jan 24 00:28:31.156062 kernel: with environment: Jan 24 00:28:31.156070 kernel: HOME=/ Jan 24 00:28:31.156097 kernel: TERM=linux Jan 24 00:28:31.156108 kernel: SCSI subsystem initialized Jan 24 00:28:31.156119 kernel: libata version 3.00 loaded. 
Jan 24 00:28:31.156313 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:28:31.156326 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:28:31.156529 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 24 00:28:31.156708 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 24 00:28:31.156885 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:28:31.157084 kernel: scsi host0: ahci Jan 24 00:28:31.157281 kernel: scsi host1: ahci Jan 24 00:28:31.158743 kernel: scsi host2: ahci Jan 24 00:28:31.158955 kernel: scsi host3: ahci Jan 24 00:28:31.160012 kernel: scsi host4: ahci Jan 24 00:28:31.160207 kernel: scsi host5: ahci Jan 24 00:28:31.160225 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1 Jan 24 00:28:31.160234 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1 Jan 24 00:28:31.160243 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1 Jan 24 00:28:31.160251 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1 Jan 24 00:28:31.160260 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1 Jan 24 00:28:31.160269 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1 Jan 24 00:28:31.160280 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:28:31.160289 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:28:31.160297 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:28:31.160305 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:28:31.160314 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 24 00:28:31.160322 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:28:31.161568 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Jan 24 00:28:31.161772 kernel: scsi host6: Virtio SCSI HBA Jan 24 00:28:31.162040 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 24 00:28:31.162247 kernel: sd 6:0:0:0: Power-on or device reset occurred Jan 24 00:28:31.162526 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Jan 24 00:28:31.162733 kernel: sd 6:0:0:0: [sda] Write Protect is off Jan 24 00:28:31.162928 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 24 00:28:31.163128 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 24 00:28:31.163140 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:28:31.163150 kernel: GPT:25804799 != 167739391 Jan 24 00:28:31.163159 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:28:31.163167 kernel: GPT:25804799 != 167739391 Jan 24 00:28:31.163176 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:28:31.163216 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:28:31.164481 kernel: sd 6:0:0:0: [sda] Attached SCSI disk Jan 24 00:28:31.164498 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 24 00:28:31.164508 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:28:31.164518 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 24 00:28:31.164527 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 24 00:28:31.164536 kernel: raid6: avx2x4 gen() 34006 MB/s Jan 24 00:28:31.164549 kernel: raid6: avx2x2 gen() 33874 MB/s Jan 24 00:28:31.164558 kernel: raid6: avx2x1 gen() 15959 MB/s Jan 24 00:28:31.164566 kernel: raid6: using algorithm avx2x4 gen() 34006 MB/s Jan 24 00:28:31.164576 kernel: raid6: .... xor() 5459 MB/s, rmw enabled Jan 24 00:28:31.164587 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:28:31.164596 kernel: xor: automatically using best checksumming function avx Jan 24 00:28:31.164605 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:28:31.164614 kernel: BTRFS: device fsid 091bfa4a-922a-4e6e-abc1-a4b74083975f devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (167) Jan 24 00:28:31.164623 kernel: BTRFS info (device dm-0): first mount of filesystem 091bfa4a-922a-4e6e-abc1-a4b74083975f Jan 24 00:28:31.164632 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:28:31.164640 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 24 00:28:31.164651 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:28:31.164660 kernel: BTRFS info (device dm-0): enabling free space tree Jan 24 00:28:31.164671 kernel: loop: module loaded Jan 24 00:28:31.164680 kernel: loop0: detected capacity change from 0 to 100560 Jan 24 00:28:31.164689 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:28:31.164699 systemd[1]: Successfully made /usr/ read-only. Jan 24 00:28:31.164711 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 00:28:31.164724 systemd[1]: Detected virtualization kvm. Jan 24 00:28:31.164733 systemd[1]: Detected architecture x86-64. Jan 24 00:28:31.164742 systemd[1]: Running in initrd. Jan 24 00:28:31.164751 systemd[1]: No hostname configured, using default hostname. Jan 24 00:28:31.164760 systemd[1]: Hostname set to . Jan 24 00:28:31.164769 systemd[1]: Initializing machine ID from random generator. Jan 24 00:28:31.164780 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:28:31.164789 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:28:31.164799 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:28:31.164808 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:28:31.164818 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:28:31.164828 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:28:31.164839 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:28:31.164849 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 24 00:28:31.164858 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:28:31.164868 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:28:31.164877 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 24 00:28:31.164886 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:28:31.164897 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:28:31.164907 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:28:31.164916 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:28:31.164925 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:28:31.164934 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:28:31.164943 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:28:31.164952 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:28:31.164964 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 24 00:28:31.164973 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:28:31.164983 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:28:31.164992 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:28:31.165001 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:28:31.165011 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:28:31.165020 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:28:31.165032 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:28:31.165041 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:28:31.165051 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 24 00:28:31.165060 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:28:31.165070 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:28:31.165079 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:28:31.165091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:28:31.165100 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:28:31.165109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:28:31.165119 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:28:31.165155 systemd-journald[304]: Collecting audit messages is enabled. Jan 24 00:28:31.165221 kernel: audit: type=1130 audit(1769214511.135:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.165231 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:28:31.165245 systemd-journald[304]: Journal started Jan 24 00:28:31.165263 systemd-journald[304]: Runtime Journal (/run/log/journal/0f42e8f4ea5e4de09a67d3514fd63487) is 8M, max 78.1M, 70.1M free. 
Jan 24 00:28:31.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.170025 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:28:31.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.183388 kernel: audit: type=1130 audit(1769214511.174:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.184140 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:28:31.197392 kernel: audit: type=1130 audit(1769214511.185:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.197422 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:28:31.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.195498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:28:31.203072 systemd-modules-load[305]: Inserted module 'br_netfilter' Jan 24 00:28:31.289627 kernel: Bridge firewalling registered Jan 24 00:28:31.290505 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:28:31.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.293450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:28:31.311053 kernel: audit: type=1130 audit(1769214511.293:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.311078 kernel: audit: type=1130 audit(1769214511.302:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.301704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:28:31.320070 kernel: audit: type=1130 audit(1769214511.311:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:28:31.311120 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:28:31.323502 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:28:31.326756 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:28:31.329043 systemd-tmpfiles[320]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 24 00:28:31.342753 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:28:31.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.352393 kernel: audit: type=1130 audit(1769214511.344:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.360083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:28:31.371216 kernel: audit: type=1130 audit(1769214511.360:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.371237 kernel: audit: type=1334 audit(1769214511.361:10): prog-id=6 op=LOAD Jan 24 00:28:31.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.361000 audit: BPF prog-id=6 op=LOAD Jan 24 00:28:31.369506 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:28:31.373744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:28:31.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.383405 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:28:31.400784 dracut-cmdline[342]: dracut-109 Jan 24 00:28:31.404896 dracut-cmdline[342]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ccc6714d5701627f00a0daea097f593263f2ea87c850869ae25db66d36e22877 Jan 24 00:28:31.428354 systemd-resolved[340]: Positive Trust Anchors: Jan 24 00:28:31.428370 systemd-resolved[340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:28:31.428395 systemd-resolved[340]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:28:31.428423 systemd-resolved[340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:28:31.458089 systemd-resolved[340]: Defaulting to hostname 'linux'. Jan 24 00:28:31.460159 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:28:31.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.461952 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:28:31.505402 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:28:31.520409 kernel: iscsi: registered transport (tcp) Jan 24 00:28:31.544090 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:28:31.544150 kernel: QLogic iSCSI HBA Driver Jan 24 00:28:31.569994 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:28:31.602055 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:28:31.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.605531 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:28:31.655222 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:28:31.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.657641 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:28:31.660507 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:28:31.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.692339 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:28:31.694000 audit: BPF prog-id=7 op=LOAD Jan 24 00:28:31.694000 audit: BPF prog-id=8 op=LOAD Jan 24 00:28:31.696511 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:28:31.724707 systemd-udevd[572]: Using default interface naming scheme 'v257'. Jan 24 00:28:31.738293 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:28:31.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:28:31.743806 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:28:31.772109 dracut-pre-trigger[643]: rd.md=0: removing MD RAID activation Jan 24 00:28:31.773781 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:28:31.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.776000 audit: BPF prog-id=9 op=LOAD Jan 24 00:28:31.777905 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:28:31.802591 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:28:31.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.806499 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:28:31.826585 systemd-networkd[685]: lo: Link UP Jan 24 00:28:31.827404 systemd-networkd[685]: lo: Gained carrier Jan 24 00:28:31.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.827994 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:28:31.829514 systemd[1]: Reached target network.target - Network. Jan 24 00:28:31.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:31.904896 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:28:31.908510 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:28:31.989132 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 24 00:28:32.045790 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 24 00:28:32.225905 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 24 00:28:32.236948 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:28:32.241510 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:28:32.243310 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:28:32.253361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:28:32.261273 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 24 00:28:32.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:32.253491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:28:32.259161 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:28:32.316400 disk-uuid[749]: Primary Header is updated. Jan 24 00:28:32.316400 disk-uuid[749]: Secondary Entries is updated. Jan 24 00:28:32.316400 disk-uuid[749]: Secondary Header is updated. 
Jan 24 00:28:32.331723 kernel: AES CTR mode by8 optimization enabled Jan 24 00:28:32.298152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:28:32.318484 systemd-networkd[685]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:28:32.318489 systemd-networkd[685]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:28:32.319987 systemd-networkd[685]: eth0: Link UP Jan 24 00:28:32.320201 systemd-networkd[685]: eth0: Gained carrier Jan 24 00:28:32.320213 systemd-networkd[685]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:28:32.445269 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:28:32.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:32.488091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:28:32.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:32.491386 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:28:32.492545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:28:32.494515 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:28:32.498623 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:28:32.525258 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:28:32.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.160459 systemd-networkd[685]: eth0: DHCPv4 address 172.234.200.140/24, gateway 172.234.200.1 acquired from 23.40.197.139 Jan 24 00:28:33.367345 disk-uuid[752]: Warning: The kernel is still using the old partition table. Jan 24 00:28:33.367345 disk-uuid[752]: The new table will be used at the next reboot or after you Jan 24 00:28:33.367345 disk-uuid[752]: run partprobe(8) or kpartx(8) Jan 24 00:28:33.367345 disk-uuid[752]: The operation has completed successfully. Jan 24 00:28:33.379141 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:28:33.398611 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 24 00:28:33.398644 kernel: audit: type=1130 audit(1769214513.380:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.398660 kernel: audit: type=1131 audit(1769214513.380:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:28:33.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.379315 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:28:33.383525 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:28:33.429576 systemd-networkd[685]: eth0: Gained IPv6LL Jan 24 00:28:33.439606 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (839) Jan 24 00:28:33.439666 kernel: BTRFS info (device sda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:28:33.443585 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:28:33.453086 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:28:33.453118 kernel: BTRFS info (device sda6): turning on async discard Jan 24 00:28:33.453131 kernel: BTRFS info (device sda6): enabling free space tree Jan 24 00:28:33.465408 kernel: BTRFS info (device sda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:28:33.467012 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:28:33.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.469577 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:28:33.477753 kernel: audit: type=1130 audit(1769214513.467:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.609131 ignition[858]: Ignition 2.24.0 Jan 24 00:28:33.609150 ignition[858]: Stage: fetch-offline Jan 24 00:28:33.609207 ignition[858]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:28:33.609220 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:28:33.609670 ignition[858]: parsed url from cmdline: "" Jan 24 00:28:33.612733 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:28:33.609676 ignition[858]: no config URL provided Jan 24 00:28:33.624160 kernel: audit: type=1130 audit(1769214513.613:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.616544 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 24 00:28:33.609682 ignition[858]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:28:33.609695 ignition[858]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:28:33.609701 ignition[858]: failed to fetch config: resource requires networking Jan 24 00:28:33.609919 ignition[858]: Ignition finished successfully Jan 24 00:28:33.643804 ignition[867]: Ignition 2.24.0 Jan 24 00:28:33.643814 ignition[867]: Stage: fetch Jan 24 00:28:33.644033 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:28:33.644050 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:28:33.644194 ignition[867]: parsed url from cmdline: "" Jan 24 00:28:33.644199 ignition[867]: no config URL provided Jan 24 00:28:33.644205 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:28:33.644213 ignition[867]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:28:33.644234 ignition[867]: PUT http://169.254.169.254/v1/token: attempt #1 Jan 24 00:28:33.747942 ignition[867]: PUT result: OK Jan 24 00:28:33.748026 ignition[867]: GET http://169.254.169.254/v1/user-data: attempt #1 Jan 24 00:28:33.861068 ignition[867]: GET result: OK Jan 24 00:28:33.861924 ignition[867]: parsing config with SHA512: 4b3a148bb33b681577a11875ddfa73a033bfb698537444d2e0adc3b0dfa68d8d291dce7aa3d1a17b3e9e7118d6334345e05e3bf0bd7a85a1f33c25b0c1f9a174 Jan 24 00:28:33.867705 unknown[867]: fetched base config from "system" Jan 24 00:28:33.867716 unknown[867]: fetched base config from "system" Jan 24 00:28:33.867963 ignition[867]: fetch: fetch complete Jan 24 00:28:33.867722 unknown[867]: fetched user config from "akamai" Jan 24 00:28:33.867969 ignition[867]: fetch: fetch passed Jan 24 00:28:33.868013 ignition[867]: Ignition finished successfully Jan 24 00:28:33.874633 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:28:33.883316 kernel: audit: type=1130 audit(1769214513.875:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.878551 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:28:33.910999 ignition[874]: Ignition 2.24.0 Jan 24 00:28:33.911015 ignition[874]: Stage: kargs Jan 24 00:28:33.911161 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:28:33.911172 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:28:33.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.914690 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:28:33.911906 ignition[874]: kargs: kargs passed Jan 24 00:28:33.923945 kernel: audit: type=1130 audit(1769214513.915:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.917532 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
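The fetch stage recorded above is a two-step exchange against the instance metadata service: ignition[867] first PUTs to http://169.254.169.254/v1/token to obtain a short-lived token, then GETs /v1/user-data with it. The following is a minimal Python sketch of that same exchange, not a reproduction of Ignition's code; the Metadata-Token-Expiry-Seconds and Metadata-Token header names and the 300-second expiry are assumptions about the metadata API, not details taken from this log.

# Minimal sketch of the token + user-data exchange logged by ignition[867] above.
# Assumed details (not in the log): the Metadata-Token-Expiry-Seconds request
# header, the Metadata-Token auth header, and the 300-second token lifetime.
import urllib.request

METADATA = "http://169.254.169.254/v1"

def fetch_user_data() -> bytes:
    # Mirrors "PUT http://169.254.169.254/v1/token: attempt #1"
    token_req = urllib.request.Request(
        f"{METADATA}/token",
        method="PUT",
        headers={"Metadata-Token-Expiry-Seconds": "300"},  # assumed header name/value
    )
    with urllib.request.urlopen(token_req, timeout=10) as resp:
        token = resp.read().decode().strip()

    # Mirrors "GET http://169.254.169.254/v1/user-data: attempt #1"
    data_req = urllib.request.Request(
        f"{METADATA}/user-data",
        headers={"Metadata-Token": token},  # assumed header name
    )
    with urllib.request.urlopen(data_req, timeout=10) as resp:
        return resp.read()

if __name__ == "__main__":
    print(fetch_user_data().decode(errors="replace"))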
Jan 24 00:28:33.911952 ignition[874]: Ignition finished successfully Jan 24 00:28:33.944752 ignition[880]: Ignition 2.24.0 Jan 24 00:28:33.944767 ignition[880]: Stage: disks Jan 24 00:28:33.944908 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:28:33.944918 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:28:33.947559 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:28:33.956217 kernel: audit: type=1130 audit(1769214513.948:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:33.945639 ignition[880]: disks: disks passed Jan 24 00:28:33.948908 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:28:33.945681 ignition[880]: Ignition finished successfully Jan 24 00:28:33.956997 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:28:33.958439 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:28:33.959993 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:28:33.961405 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:28:33.964026 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:28:34.007550 systemd-fsck[889]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 24 00:28:34.009395 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:28:34.020157 kernel: audit: type=1130 audit(1769214514.010:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:34.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:34.013473 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:28:34.132401 kernel: EXT4-fs (sda9): mounted filesystem 4e30a7d6-83d2-471c-98e0-68a57c0656af r/w with ordered data mode. Quota mode: none. Jan 24 00:28:34.132660 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:28:34.133914 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:28:34.136621 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:28:34.140462 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:28:34.141767 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:28:34.141807 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:28:34.141830 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:28:34.151597 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:28:34.154515 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 24 00:28:34.163450 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (897) Jan 24 00:28:34.168861 kernel: BTRFS info (device sda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:28:34.168887 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:28:34.175397 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:28:34.175421 kernel: BTRFS info (device sda6): turning on async discard Jan 24 00:28:34.177566 kernel: BTRFS info (device sda6): enabling free space tree Jan 24 00:28:34.180934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:28:34.331768 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:28:34.340434 kernel: audit: type=1130 audit(1769214514.332:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:34.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:34.335511 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:28:34.343944 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:28:34.356404 kernel: BTRFS info (device sda6): last unmount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:28:34.380883 ignition[994]: INFO : Ignition 2.24.0 Jan 24 00:28:34.380883 ignition[994]: INFO : Stage: mount Jan 24 00:28:34.383912 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:28:34.383912 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:28:34.383912 ignition[994]: INFO : mount: mount passed Jan 24 00:28:34.383912 ignition[994]: INFO : Ignition finished successfully Jan 24 00:28:34.395611 kernel: audit: type=1130 audit(1769214514.384:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:34.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:34.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:34.381403 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:28:34.384908 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:28:34.396500 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:28:34.425127 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:28:34.428300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 24 00:28:34.473433 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1007) Jan 24 00:28:34.478721 kernel: BTRFS info (device sda6): first mount of filesystem 98bf19d7-5744-4291-8d20-e6403ff726cc Jan 24 00:28:34.478797 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:28:34.490151 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:28:34.490241 kernel: BTRFS info (device sda6): turning on async discard Jan 24 00:28:34.490255 kernel: BTRFS info (device sda6): enabling free space tree Jan 24 00:28:34.495964 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:28:34.537174 ignition[1023]: INFO : Ignition 2.24.0 Jan 24 00:28:34.537174 ignition[1023]: INFO : Stage: files Jan 24 00:28:34.539470 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:28:34.539470 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:28:34.543642 ignition[1023]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:28:34.545022 ignition[1023]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:28:34.545022 ignition[1023]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:28:34.550233 ignition[1023]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:28:34.551888 ignition[1023]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:28:34.551888 ignition[1023]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:28:34.551126 unknown[1023]: wrote ssh authorized keys file for user: core Jan 24 00:28:34.586827 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:28:34.586827 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:28:34.741395 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:28:34.790366 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:28:34.792113 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:28:34.792113 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:28:34.792113 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:28:34.792113 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:28:34.792113 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:28:34.792113 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:28:34.792113 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:28:34.792113 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:28:34.807245 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:28:34.807245 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:28:34.807245 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:28:34.812965 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:28:34.812965 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:28:34.812965 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:28:35.225145 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:28:35.599916 ignition[1023]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:28:35.599916 ignition[1023]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:28:35.603788 ignition[1023]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:28:35.605088 ignition[1023]: INFO : files: files passed Jan 24 00:28:35.605088 ignition[1023]: INFO : Ignition finished successfully Jan 24 00:28:35.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 24 00:28:35.607145 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:28:35.611545 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:28:35.621614 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:28:35.627256 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:28:35.635588 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:28:35.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.652421 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:28:35.652421 initrd-setup-root-after-ignition[1056]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:28:35.655307 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:28:35.656162 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:28:35.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.657646 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:28:35.661551 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:28:35.729789 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:28:35.729932 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:28:35.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.732145 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:28:35.733701 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:28:35.736270 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:28:35.737293 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:28:35.777902 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:28:35.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.780676 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:28:35.801992 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
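The files stage logged earlier (ops 1 through 10) is driven by the user config fetched above: it adds SSH keys for the core user, downloads the helm tarball, writes files under /home/core and /etc, links /etc/extensions/kubernetes.raw to the Kubernetes sysext image, and installs and preset-enables prepare-helm.service plus a drop-in for coreos-metadata.service. Below is a rough Python sketch that prints an Ignition-v3-style config whose files stage would produce operations of that shape; the spec version, field names, unit contents, and inline data are assumptions for illustration and are not reconstructed from this host's actual user-data.

import json

# Hypothetical sketch of an Ignition v3-style config matching the shape of the
# files-stage operations logged above. All values are illustrative assumptions.
config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"]}
        ]
    },
    "storage": {
        "files": [
            {   # mirrors op(3): fetch a remote archive into /opt
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
            },
            {   # mirrors op(8): small inline file
                "path": "/etc/flatcar/update.conf",
                "contents": {"source": "data:,GROUP%3Dstable%0A"},
                "overwrite": True,
            },
        ],
        "links": [
            {   # mirrors op(9): sysext symlink
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                "hard": False,
            }
        ],
    },
    "systemd": {
        "units": [
            {   # mirrors op(b)/op(f): install and preset-enable a unit
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar xf /opt/helm-v3.17.0-linux-amd64.tar.gz -C /opt\n[Install]\nWantedBy=multi-user.target\n",
            },
            {   # mirrors op(d)/op(e): drop-in for an existing unit
                "name": "coreos-metadata.service",
                "dropins": [{"name": "00-custom-metadata.conf", "contents": "[Service]\n# overrides go here\n"}],
            },
        ]
    },
}

print(json.dumps(config, indent=2))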
Jan 24 00:28:35.803343 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:28:35.804214 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:28:35.805239 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:28:35.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.807427 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:28:35.807554 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:28:35.809824 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:28:35.811037 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:28:35.812819 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:28:35.814888 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:28:35.816518 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:28:35.818340 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 24 00:28:35.820072 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:28:35.821880 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:28:35.823701 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:28:35.825659 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:28:35.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.827346 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:28:35.828955 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:28:35.829158 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:28:35.831116 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:28:35.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.832325 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:28:35.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.833883 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:28:35.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.834099 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:28:35.835481 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:28:35.835719 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:28:35.837806 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 24 00:28:35.838004 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:28:35.839002 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:28:35.839172 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:28:35.865585 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:28:35.872782 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:28:35.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.874721 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:28:35.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.874892 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:28:35.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.876106 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:28:35.876323 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:28:35.877577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:28:35.877718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:28:35.890751 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:28:35.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.890881 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:28:35.901395 ignition[1080]: INFO : Ignition 2.24.0 Jan 24 00:28:35.901395 ignition[1080]: INFO : Stage: umount Jan 24 00:28:35.901395 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:28:35.901395 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:28:35.908533 ignition[1080]: INFO : umount: umount passed Jan 24 00:28:35.908533 ignition[1080]: INFO : Ignition finished successfully Jan 24 00:28:35.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.907005 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 24 00:28:35.907764 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:28:35.909807 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:28:35.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.909859 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:28:35.911519 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:28:35.911571 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:28:35.912285 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:28:35.912338 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:28:35.915542 systemd[1]: Stopped target network.target - Network. Jan 24 00:28:35.916528 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:28:35.916584 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:28:35.918590 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:28:35.919903 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:28:35.922550 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:28:35.923816 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:28:35.925438 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:28:35.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.927216 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:28:35.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.927268 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:28:35.928647 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:28:35.928689 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:28:35.930261 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 24 00:28:35.930297 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:28:35.932029 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:28:35.932095 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:28:35.933619 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:28:35.933672 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:28:35.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.935181 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:28:35.936756 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 24 00:28:35.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.942561 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:28:35.943617 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:28:35.943737 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:28:35.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.945331 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:28:35.946528 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:28:35.950157 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:28:35.950280 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:28:35.954000 audit: BPF prog-id=6 op=UNLOAD Jan 24 00:28:35.955000 audit: BPF prog-id=9 op=UNLOAD Jan 24 00:28:35.955169 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 24 00:28:35.957041 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:28:35.957089 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:28:35.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.958743 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:28:35.958806 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:28:35.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.961234 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:28:35.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.962830 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:28:35.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.962891 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:28:35.965905 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:28:35.965957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:28:35.967618 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:28:35.967668 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:28:35.969832 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:28:35.990832 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:28:35.991111 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 24 00:28:35.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:35.995691 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:28:35.995783 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:28:35.998836 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:28:35.998888 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:28:36.000608 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:28:36.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.000680 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:28:36.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.004347 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:28:36.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.004473 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:28:36.006284 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:28:36.006416 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:28:36.012206 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:28:36.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.013267 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 24 00:28:36.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.013340 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:28:36.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.014425 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:28:36.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.014500 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:28:36.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.016568 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jan 24 00:28:36.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.016632 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:28:36.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:36.018759 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:28:36.018824 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:28:36.021308 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:28:36.021362 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:28:36.023260 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:28:36.036476 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:28:36.047669 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:28:36.047782 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:28:36.050209 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:28:36.052503 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:28:36.079422 systemd[1]: Switching root. Jan 24 00:28:36.120455 systemd-journald[304]: Received SIGTERM from PID 1 (systemd). Jan 24 00:28:36.120538 systemd-journald[304]: Journal stopped Jan 24 00:28:37.510249 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:28:37.510286 kernel: SELinux: policy capability open_perms=1 Jan 24 00:28:37.510298 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:28:37.510309 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:28:37.510319 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:28:37.510329 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:28:37.510342 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:28:37.510355 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:28:37.510370 kernel: SELinux: policy capability userspace_initial_context=0 Jan 24 00:28:37.510397 systemd[1]: Successfully loaded SELinux policy in 78.984ms. Jan 24 00:28:37.510410 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.373ms. Jan 24 00:28:37.510425 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 24 00:28:37.510436 systemd[1]: Detected virtualization kvm. Jan 24 00:28:37.510461 systemd[1]: Detected architecture x86-64. Jan 24 00:28:37.510473 systemd[1]: Detected first boot. Jan 24 00:28:37.510487 systemd[1]: Initializing machine ID from random generator. 
Jan 24 00:28:37.510498 zram_generator::config[1124]: No configuration found. Jan 24 00:28:37.510510 kernel: Guest personality initialized and is inactive Jan 24 00:28:37.510521 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 24 00:28:37.510531 kernel: Initialized host personality Jan 24 00:28:37.510544 kernel: NET: Registered PF_VSOCK protocol family Jan 24 00:28:37.510555 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:28:37.510566 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:28:37.510578 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:28:37.510589 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:28:37.510607 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:28:37.510624 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:28:37.510638 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:28:37.510650 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:28:37.510661 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:28:37.510672 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:28:37.510685 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:28:37.510696 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:28:37.510709 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:28:37.510721 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:28:37.510732 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:28:37.510743 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:28:37.510755 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:28:37.510766 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:28:37.510780 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:28:37.510798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:28:37.510821 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:28:37.510834 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:28:37.510846 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:28:37.510857 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:28:37.510868 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:28:37.510882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:28:37.510894 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:28:37.510906 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 24 00:28:37.510917 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:28:37.510928 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:28:37.510939 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jan 24 00:28:37.510953 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:28:37.510964 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 24 00:28:37.510976 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 24 00:28:37.510988 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 24 00:28:37.511001 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:28:37.511015 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 24 00:28:37.511026 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 24 00:28:37.511037 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:28:37.511048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:28:37.511060 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:28:37.511071 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:28:37.511085 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:28:37.511096 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:28:37.511107 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:28:37.511123 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:28:37.511143 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:28:37.511159 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:28:37.511172 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:28:37.511186 systemd[1]: Reached target machines.target - Containers. Jan 24 00:28:37.511197 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:28:37.511209 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:28:37.511220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:28:37.511232 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:28:37.511243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:28:37.511257 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:28:37.511268 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:28:37.511281 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:28:37.511292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:28:37.511309 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:28:37.511328 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:28:37.511340 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:28:37.511359 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:28:37.512242 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 24 00:28:37.512262 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:28:37.512274 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:28:37.512286 kernel: fuse: init (API version 7.41) Jan 24 00:28:37.512298 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:28:37.512313 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:28:37.512325 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:28:37.512336 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 24 00:28:37.512347 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:28:37.512360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:28:37.514433 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:28:37.514480 systemd-journald[1204]: Collecting audit messages is enabled. Jan 24 00:28:37.514504 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:28:37.514516 kernel: ACPI: bus type drm_connector registered Jan 24 00:28:37.514530 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:28:37.514542 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:28:37.514553 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:28:37.514581 systemd-journald[1204]: Journal started Jan 24 00:28:37.514601 systemd-journald[1204]: Runtime Journal (/run/log/journal/3451be07b3104b918384732e5f1e965c) is 8M, max 78.1M, 70.1M free. Jan 24 00:28:37.175000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 24 00:28:37.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:28:37.399000 audit: BPF prog-id=14 op=UNLOAD Jan 24 00:28:37.399000 audit: BPF prog-id=13 op=UNLOAD Jan 24 00:28:37.404000 audit: BPF prog-id=15 op=LOAD Jan 24 00:28:37.404000 audit: BPF prog-id=16 op=LOAD Jan 24 00:28:37.404000 audit: BPF prog-id=17 op=LOAD Jan 24 00:28:37.495000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 24 00:28:37.495000 audit[1204]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe662868e0 a2=4000 a3=0 items=0 ppid=1 pid=1204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:37.495000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 24 00:28:37.027609 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:28:37.053955 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 24 00:28:37.054727 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:28:37.520412 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:28:37.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.522517 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:28:37.523854 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:28:37.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.525230 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:28:37.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.526641 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:28:37.526946 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:28:37.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.528487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:28:37.528773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:28:37.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:28:37.530239 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:28:37.530756 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:28:37.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.532024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:28:37.532304 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:28:37.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.533658 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:28:37.533947 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:28:37.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.535105 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:28:37.535522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:28:37.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.536746 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:28:37.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.538405 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:28:37.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.540726 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 24 00:28:37.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.542156 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 24 00:28:37.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.559741 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:28:37.562200 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 24 00:28:37.566485 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:28:37.570431 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:28:37.572715 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:28:37.572755 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:28:37.576661 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 24 00:28:37.579206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:28:37.579334 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:28:37.582512 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:28:37.589606 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:28:37.591818 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:28:37.595576 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:28:37.596408 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:28:37.600444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:28:37.606602 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:28:37.613512 systemd-journald[1204]: Time spent on flushing to /var/log/journal/3451be07b3104b918384732e5f1e965c is 70.846ms for 1119 entries. Jan 24 00:28:37.613512 systemd-journald[1204]: System Journal (/var/log/journal/3451be07b3104b918384732e5f1e965c) is 8M, max 588.1M, 580.1M free. Jan 24 00:28:37.724317 systemd-journald[1204]: Received client request to flush runtime journal. Jan 24 00:28:37.724872 kernel: loop1: detected capacity change from 0 to 50784 Jan 24 00:28:37.724914 kernel: loop2: detected capacity change from 0 to 111560 Jan 24 00:28:37.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:28:37.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.610650 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:28:37.617141 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:28:37.617998 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:28:37.649574 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:28:37.652584 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:28:37.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.656644 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 24 00:28:37.676593 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:28:37.699782 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:28:37.718521 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 24 00:28:37.723457 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 24 00:28:37.723470 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 24 00:28:37.729059 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:28:37.737955 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:28:37.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.741974 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:28:37.757388 kernel: loop3: detected capacity change from 0 to 8 Jan 24 00:28:37.774448 kernel: loop4: detected capacity change from 0 to 224512 Jan 24 00:28:37.795519 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:28:37.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.798000 audit: BPF prog-id=18 op=LOAD Jan 24 00:28:37.798000 audit: BPF prog-id=19 op=LOAD Jan 24 00:28:37.798000 audit: BPF prog-id=20 op=LOAD Jan 24 00:28:37.803000 audit: BPF prog-id=21 op=LOAD Jan 24 00:28:37.801534 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
Jan 24 00:28:37.804606 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:28:37.807598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:28:37.818195 kernel: loop5: detected capacity change from 0 to 50784 Jan 24 00:28:37.821000 audit: BPF prog-id=22 op=LOAD Jan 24 00:28:37.823000 audit: BPF prog-id=23 op=LOAD Jan 24 00:28:37.823000 audit: BPF prog-id=24 op=LOAD Jan 24 00:28:37.825675 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 24 00:28:37.827000 audit: BPF prog-id=25 op=LOAD Jan 24 00:28:37.828000 audit: BPF prog-id=26 op=LOAD Jan 24 00:28:37.828000 audit: BPF prog-id=27 op=LOAD Jan 24 00:28:37.829832 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:28:37.845433 kernel: loop6: detected capacity change from 0 to 111560 Jan 24 00:28:37.867598 kernel: loop7: detected capacity change from 0 to 8 Jan 24 00:28:37.880407 kernel: loop1: detected capacity change from 0 to 224512 Jan 24 00:28:37.879872 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jan 24 00:28:37.879884 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jan 24 00:28:37.894015 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:28:37.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:37.906094 (sd-merge)[1276]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'. Jan 24 00:28:37.917074 (sd-merge)[1276]: Merged extensions into '/usr'. Jan 24 00:28:37.928783 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:28:37.928800 systemd[1]: Reloading... Jan 24 00:28:37.962778 systemd-nsresourced[1278]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 24 00:28:38.080688 zram_generator::config[1323]: No configuration found. Jan 24 00:28:38.209622 systemd-oomd[1274]: No swap; memory pressure usage will be degraded Jan 24 00:28:38.219261 systemd-resolved[1275]: Positive Trust Anchors: Jan 24 00:28:38.219276 systemd-resolved[1275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:28:38.219282 systemd-resolved[1275]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 24 00:28:38.219311 systemd-resolved[1275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:28:38.229662 systemd-resolved[1275]: Defaulting to hostname 'linux'. Jan 24 00:28:38.322549 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:28:38.322814 systemd[1]: Reloading finished in 393 ms. Jan 24 00:28:38.353623 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 24 00:28:38.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.354562 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 24 00:28:38.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.355655 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 24 00:28:38.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.356593 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:28:38.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.357681 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:28:38.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.358772 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:28:38.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.363730 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:28:38.372705 systemd[1]: Starting ensure-sysext.service... Jan 24 00:28:38.377000 audit: BPF prog-id=8 op=UNLOAD Jan 24 00:28:38.377000 audit: BPF prog-id=7 op=UNLOAD Jan 24 00:28:38.377000 audit: BPF prog-id=28 op=LOAD Jan 24 00:28:38.377000 audit: BPF prog-id=29 op=LOAD Jan 24 00:28:38.376517 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:28:38.385000 audit: BPF prog-id=30 op=LOAD Jan 24 00:28:38.384516 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 24 00:28:38.388525 kernel: kauditd_printk_skb: 120 callbacks suppressed Jan 24 00:28:38.388572 kernel: audit: type=1334 audit(1769214518.385:156): prog-id=30 op=LOAD Jan 24 00:28:38.385000 audit: BPF prog-id=25 op=UNLOAD Jan 24 00:28:38.399449 kernel: audit: type=1334 audit(1769214518.385:157): prog-id=25 op=UNLOAD Jan 24 00:28:38.399499 kernel: audit: type=1334 audit(1769214518.385:158): prog-id=31 op=LOAD Jan 24 00:28:38.399528 kernel: audit: type=1334 audit(1769214518.385:159): prog-id=32 op=LOAD Jan 24 00:28:38.385000 audit: BPF prog-id=31 op=LOAD Jan 24 00:28:38.385000 audit: BPF prog-id=32 op=LOAD Jan 24 00:28:38.405245 kernel: audit: type=1334 audit(1769214518.385:160): prog-id=26 op=UNLOAD Jan 24 00:28:38.385000 audit: BPF prog-id=26 op=UNLOAD Jan 24 00:28:38.385000 audit: BPF prog-id=27 op=UNLOAD Jan 24 00:28:38.408511 kernel: audit: type=1334 audit(1769214518.385:161): prog-id=27 op=UNLOAD Jan 24 00:28:38.387000 audit: BPF prog-id=33 op=LOAD Jan 24 00:28:38.387000 audit: BPF prog-id=22 op=UNLOAD Jan 24 00:28:38.412787 kernel: audit: type=1334 audit(1769214518.387:162): prog-id=33 op=LOAD Jan 24 00:28:38.412832 kernel: audit: type=1334 audit(1769214518.387:163): prog-id=22 op=UNLOAD Jan 24 00:28:38.412855 kernel: audit: type=1334 audit(1769214518.387:164): prog-id=34 op=LOAD Jan 24 00:28:38.387000 audit: BPF prog-id=34 op=LOAD Jan 24 00:28:38.414885 kernel: audit: type=1334 audit(1769214518.387:165): prog-id=35 op=LOAD Jan 24 00:28:38.387000 audit: BPF prog-id=35 op=LOAD Jan 24 00:28:38.413025 systemd[1]: Reload requested from client PID 1366 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:28:38.413040 systemd[1]: Reloading... Jan 24 00:28:38.387000 audit: BPF prog-id=23 op=UNLOAD Jan 24 00:28:38.387000 audit: BPF prog-id=24 op=UNLOAD Jan 24 00:28:38.391000 audit: BPF prog-id=36 op=LOAD Jan 24 00:28:38.391000 audit: BPF prog-id=15 op=UNLOAD Jan 24 00:28:38.391000 audit: BPF prog-id=37 op=LOAD Jan 24 00:28:38.391000 audit: BPF prog-id=38 op=LOAD Jan 24 00:28:38.391000 audit: BPF prog-id=16 op=UNLOAD Jan 24 00:28:38.391000 audit: BPF prog-id=17 op=UNLOAD Jan 24 00:28:38.392000 audit: BPF prog-id=39 op=LOAD Jan 24 00:28:38.392000 audit: BPF prog-id=21 op=UNLOAD Jan 24 00:28:38.393000 audit: BPF prog-id=40 op=LOAD Jan 24 00:28:38.393000 audit: BPF prog-id=18 op=UNLOAD Jan 24 00:28:38.393000 audit: BPF prog-id=41 op=LOAD Jan 24 00:28:38.393000 audit: BPF prog-id=42 op=LOAD Jan 24 00:28:38.393000 audit: BPF prog-id=19 op=UNLOAD Jan 24 00:28:38.394000 audit: BPF prog-id=20 op=UNLOAD Jan 24 00:28:38.426752 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 24 00:28:38.427077 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 24 00:28:38.427485 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:28:38.431152 systemd-tmpfiles[1367]: ACLs are not supported, ignoring. Jan 24 00:28:38.431229 systemd-tmpfiles[1367]: ACLs are not supported, ignoring. Jan 24 00:28:38.441444 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:28:38.441528 systemd-tmpfiles[1367]: Skipping /boot Jan 24 00:28:38.458114 systemd-udevd[1368]: Using default interface naming scheme 'v257'. Jan 24 00:28:38.461007 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 24 00:28:38.461955 systemd-tmpfiles[1367]: Skipping /boot Jan 24 00:28:38.525777 zram_generator::config[1414]: No configuration found. Jan 24 00:28:38.688420 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:28:38.696401 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:28:38.744243 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:28:38.744967 systemd[1]: Reloading finished in 331 ms. Jan 24 00:28:38.747482 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:28:38.752461 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:28:38.755299 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:28:38.757472 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:28:38.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.771418 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:28:38.772000 audit: BPF prog-id=43 op=LOAD Jan 24 00:28:38.773000 audit: BPF prog-id=40 op=UNLOAD Jan 24 00:28:38.773000 audit: BPF prog-id=44 op=LOAD Jan 24 00:28:38.773000 audit: BPF prog-id=45 op=LOAD Jan 24 00:28:38.773000 audit: BPF prog-id=41 op=UNLOAD Jan 24 00:28:38.773000 audit: BPF prog-id=42 op=UNLOAD Jan 24 00:28:38.777000 audit: BPF prog-id=46 op=LOAD Jan 24 00:28:38.778000 audit: BPF prog-id=36 op=UNLOAD Jan 24 00:28:38.778000 audit: BPF prog-id=47 op=LOAD Jan 24 00:28:38.779000 audit: BPF prog-id=48 op=LOAD Jan 24 00:28:38.779000 audit: BPF prog-id=37 op=UNLOAD Jan 24 00:28:38.779000 audit: BPF prog-id=38 op=UNLOAD Jan 24 00:28:38.779000 audit: BPF prog-id=49 op=LOAD Jan 24 00:28:38.781000 audit: BPF prog-id=33 op=UNLOAD Jan 24 00:28:38.781000 audit: BPF prog-id=50 op=LOAD Jan 24 00:28:38.781000 audit: BPF prog-id=51 op=LOAD Jan 24 00:28:38.781000 audit: BPF prog-id=34 op=UNLOAD Jan 24 00:28:38.781000 audit: BPF prog-id=35 op=UNLOAD Jan 24 00:28:38.783000 audit: BPF prog-id=52 op=LOAD Jan 24 00:28:38.783000 audit: BPF prog-id=39 op=UNLOAD Jan 24 00:28:38.785000 audit: BPF prog-id=53 op=LOAD Jan 24 00:28:38.785000 audit: BPF prog-id=30 op=UNLOAD Jan 24 00:28:38.785000 audit: BPF prog-id=54 op=LOAD Jan 24 00:28:38.785000 audit: BPF prog-id=55 op=LOAD Jan 24 00:28:38.785000 audit: BPF prog-id=31 op=UNLOAD Jan 24 00:28:38.785000 audit: BPF prog-id=32 op=UNLOAD Jan 24 00:28:38.786000 audit: BPF prog-id=56 op=LOAD Jan 24 00:28:38.786000 audit: BPF prog-id=57 op=LOAD Jan 24 00:28:38.786000 audit: BPF prog-id=28 op=UNLOAD Jan 24 00:28:38.786000 audit: BPF prog-id=29 op=UNLOAD Jan 24 00:28:38.801707 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 24 00:28:38.806753 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:28:38.813565 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:28:38.818577 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 24 00:28:38.821000 audit: BPF prog-id=58 op=LOAD Jan 24 00:28:38.826142 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:28:38.829350 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:28:38.877255 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:28:38.879607 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:28:38.889321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:28:38.895657 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:28:38.913000 audit[1488]: SYSTEM_BOOT pid=1488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.910526 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:28:38.930468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:28:38.931487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:28:38.931875 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:28:38.932492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:28:38.932584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:28:38.948399 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:28:38.948896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:28:38.949178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:28:38.949879 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:28:38.950056 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:28:38.950171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:28:38.958050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:28:38.958655 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:28:38.966164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:28:38.967990 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 24 00:28:38.968163 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 24 00:28:38.968250 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 24 00:28:38.968357 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:28:38.977210 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:28:38.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.980143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:28:38.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:38.985476 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:28:38.987279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:28:38.987783 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:28:39.001838 systemd[1]: Finished ensure-sysext.service. Jan 24 00:28:39.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:39.007779 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:28:39.008069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:28:39.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:39.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:39.016000 audit: BPF prog-id=59 op=LOAD Jan 24 00:28:39.015585 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 24 00:28:39.015698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:28:39.022589 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:28:39.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:39.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:39.037873 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:28:39.038231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:28:39.081846 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:28:39.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:39.144000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 24 00:28:39.144000 audit[1529]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe97ddda40 a2=420 a3=0 items=0 ppid=1483 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:39.144000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:28:39.145342 augenrules[1529]: No rules Jan 24 00:28:39.147510 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:28:39.147893 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 00:28:39.168968 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:28:39.172876 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:28:39.174330 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:28:39.178909 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:28:39.211301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:28:39.232824 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:28:39.488776 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:28:39.505344 systemd-networkd[1487]: lo: Link UP Jan 24 00:28:39.505353 systemd-networkd[1487]: lo: Gained carrier Jan 24 00:28:39.513006 systemd-networkd[1487]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:28:39.513016 systemd-networkd[1487]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 24 00:28:39.516058 systemd-networkd[1487]: eth0: Link UP Jan 24 00:28:39.517144 systemd-networkd[1487]: eth0: Gained carrier Jan 24 00:28:39.517162 systemd-networkd[1487]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 24 00:28:39.556719 ldconfig[1485]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:28:39.566150 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:28:39.567784 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:28:39.569172 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:28:39.573287 systemd[1]: Reached target network.target - Network. Jan 24 00:28:39.575178 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:28:39.577825 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 24 00:28:39.580519 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:28:39.584671 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:28:39.606545 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:28:39.609158 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 24 00:28:39.610295 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:28:39.611209 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:28:39.612069 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:28:39.613050 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 24 00:28:39.614340 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:28:39.615230 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:28:39.616058 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 24 00:28:39.617041 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 24 00:28:39.617794 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:28:39.618566 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:28:39.618602 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:28:39.619461 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:28:39.621216 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:28:39.628701 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:28:39.631970 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 24 00:28:39.632917 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 24 00:28:39.633780 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 24 00:28:39.637602 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:28:39.638720 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jan 24 00:28:39.640212 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:28:39.641726 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:28:39.642440 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:28:39.643163 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:28:39.643219 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:28:39.644417 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:28:39.646525 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:28:39.650578 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:28:39.656589 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:28:39.659656 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:28:39.663643 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:28:39.665411 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:28:39.669051 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 24 00:28:39.692057 jq[1559]: false Jan 24 00:28:39.692859 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:28:39.699544 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:28:39.702587 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:28:39.706825 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing passwd entry cache Jan 24 00:28:39.707621 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:28:39.708319 oslogin_cache_refresh[1561]: Refreshing passwd entry cache Jan 24 00:28:39.710544 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting users, quitting Jan 24 00:28:39.710592 oslogin_cache_refresh[1561]: Failure getting users, quitting Jan 24 00:28:39.710666 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:28:39.710704 oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 24 00:28:39.710796 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing group entry cache Jan 24 00:28:39.710826 oslogin_cache_refresh[1561]: Refreshing group entry cache Jan 24 00:28:39.711325 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting groups, quitting Jan 24 00:28:39.711390 oslogin_cache_refresh[1561]: Failure getting groups, quitting Jan 24 00:28:39.711443 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:28:39.711481 oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 24 00:28:39.714907 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:28:39.716472 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 24 00:28:39.716910 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:28:39.717543 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:28:39.720349 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:28:39.725141 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:28:39.727550 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:28:39.727861 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:28:39.728199 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 24 00:28:39.728504 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 24 00:28:39.732970 extend-filesystems[1560]: Found /dev/sda6 Jan 24 00:28:39.765590 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:28:39.766012 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:28:39.780613 extend-filesystems[1560]: Found /dev/sda9 Jan 24 00:28:39.786713 extend-filesystems[1560]: Checking size of /dev/sda9 Jan 24 00:28:39.809075 jq[1573]: true Jan 24 00:28:39.824560 extend-filesystems[1560]: Resized partition /dev/sda9 Jan 24 00:28:39.835560 update_engine[1572]: I20260124 00:28:39.833042 1572 main.cc:92] Flatcar Update Engine starting Jan 24 00:28:39.838693 coreos-metadata[1556]: Jan 24 00:28:39.827 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 24 00:28:39.842518 extend-filesystems[1609]: resize2fs 1.47.3 (8-Jul-2025) Jan 24 00:28:39.858935 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks Jan 24 00:28:39.858975 tar[1580]: linux-amd64/LICENSE Jan 24 00:28:39.858975 tar[1580]: linux-amd64/helm Jan 24 00:28:39.872255 jq[1602]: true Jan 24 00:28:39.876919 dbus-daemon[1557]: [system] SELinux support is enabled Jan 24 00:28:39.877214 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:28:39.882189 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:28:39.884348 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:28:39.902600 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:28:39.902777 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:28:39.904354 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:28:39.904395 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:28:39.931491 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:28:39.936009 update_engine[1572]: I20260124 00:28:39.935788 1572 update_check_scheduler.cc:74] Next update check in 7m11s Jan 24 00:28:39.949665 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 24 00:28:39.979663 systemd-logind[1571]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 00:28:39.979696 systemd-logind[1571]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:28:39.980447 systemd-logind[1571]: New seat seat0. Jan 24 00:28:39.981270 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:28:40.045969 bash[1635]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:28:40.048460 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:28:40.056258 systemd[1]: Starting sshkeys.service... Jan 24 00:28:40.134316 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:28:40.138181 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:28:40.265552 kernel: EXT4-fs (sda9): resized filesystem to 19377147 Jan 24 00:28:40.288473 systemd-networkd[1487]: eth0: DHCPv4 address 172.234.200.140/24, gateway 172.234.200.1 acquired from 23.40.197.139 Jan 24 00:28:40.289778 extend-filesystems[1609]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 24 00:28:40.289778 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 24 00:28:40.289778 extend-filesystems[1609]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long. Jan 24 00:28:40.289442 dbus-daemon[1557]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1487 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 00:28:40.314040 containerd[1583]: time="2026-01-24T00:28:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 24 00:28:40.291878 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:28:40.319508 extend-filesystems[1560]: Resized filesystem in /dev/sda9 Jan 24 00:28:40.323506 containerd[1583]: time="2026-01-24T00:28:40.316024211Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 24 00:28:40.293288 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:28:40.293721 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. Jan 24 00:28:40.306062 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 24 00:28:40.346243 coreos-metadata[1638]: Jan 24 00:28:40.346 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 24 00:28:40.351071 containerd[1583]: time="2026-01-24T00:28:40.349992624Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.06µs" Jan 24 00:28:40.351071 containerd[1583]: time="2026-01-24T00:28:40.350026414Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 24 00:28:40.351071 containerd[1583]: time="2026-01-24T00:28:40.350069994Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 24 00:28:40.351071 containerd[1583]: time="2026-01-24T00:28:40.350082804Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 24 00:28:40.351071 containerd[1583]: time="2026-01-24T00:28:40.350274934Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 24 00:28:40.351071 containerd[1583]: time="2026-01-24T00:28:40.350296854Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 00:28:40.351071 containerd[1583]: time="2026-01-24T00:28:40.350361294Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 24 00:28:40.352533 containerd[1583]: time="2026-01-24T00:28:40.352497503Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 00:28:40.352795 containerd[1583]: time="2026-01-24T00:28:40.352752222Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 24 00:28:40.352795 containerd[1583]: time="2026-01-24T00:28:40.352776502Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 00:28:40.352795 containerd[1583]: time="2026-01-24T00:28:40.352787612Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 24 00:28:40.352795 containerd[1583]: time="2026-01-24T00:28:40.352795352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 00:28:40.353008 containerd[1583]: time="2026-01-24T00:28:40.352967502Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 24 00:28:40.353008 containerd[1583]: time="2026-01-24T00:28:40.352990802Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 24 00:28:40.352977 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:28:40.354491 containerd[1583]: time="2026-01-24T00:28:40.353086622Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 24 00:28:40.356021 containerd[1583]: time="2026-01-24T00:28:40.355982371Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 00:28:40.356152 
containerd[1583]: time="2026-01-24T00:28:40.356040601Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 24 00:28:40.356152 containerd[1583]: time="2026-01-24T00:28:40.356066141Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 24 00:28:40.356152 containerd[1583]: time="2026-01-24T00:28:40.356110881Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 24 00:28:40.356408 containerd[1583]: time="2026-01-24T00:28:40.356363941Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 24 00:28:40.356498 containerd[1583]: time="2026-01-24T00:28:40.356459831Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:28:40.360164 containerd[1583]: time="2026-01-24T00:28:40.360125629Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 24 00:28:40.360239 containerd[1583]: time="2026-01-24T00:28:40.360180549Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 00:28:40.360278 containerd[1583]: time="2026-01-24T00:28:40.360258339Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 24 00:28:40.360278 containerd[1583]: time="2026-01-24T00:28:40.360270639Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 24 00:28:40.360336 containerd[1583]: time="2026-01-24T00:28:40.360283729Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 24 00:28:40.360336 containerd[1583]: time="2026-01-24T00:28:40.360295929Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 24 00:28:40.360336 containerd[1583]: time="2026-01-24T00:28:40.360305719Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 24 00:28:40.360336 containerd[1583]: time="2026-01-24T00:28:40.360315209Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 24 00:28:40.360336 containerd[1583]: time="2026-01-24T00:28:40.360327119Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 24 00:28:40.360336 containerd[1583]: time="2026-01-24T00:28:40.360339339Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 24 00:28:40.360531 containerd[1583]: time="2026-01-24T00:28:40.360360769Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 24 00:28:40.362441 containerd[1583]: time="2026-01-24T00:28:40.362403678Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 24 00:28:40.362441 containerd[1583]: time="2026-01-24T00:28:40.362433488Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 24 00:28:40.362524 containerd[1583]: time="2026-01-24T00:28:40.362490068Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Jan 24 00:28:40.362657 containerd[1583]: time="2026-01-24T00:28:40.362605947Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 24 00:28:40.362657 containerd[1583]: time="2026-01-24T00:28:40.362635777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 24 00:28:40.362723 containerd[1583]: time="2026-01-24T00:28:40.362659967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 24 00:28:40.362723 containerd[1583]: time="2026-01-24T00:28:40.362677167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 24 00:28:40.362723 containerd[1583]: time="2026-01-24T00:28:40.362687737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 24 00:28:40.362723 containerd[1583]: time="2026-01-24T00:28:40.362697287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 24 00:28:40.362723 containerd[1583]: time="2026-01-24T00:28:40.362708937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 24 00:28:40.362723 containerd[1583]: time="2026-01-24T00:28:40.362720727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 24 00:28:40.362896 containerd[1583]: time="2026-01-24T00:28:40.362735357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 24 00:28:40.362896 containerd[1583]: time="2026-01-24T00:28:40.362745827Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 24 00:28:40.362896 containerd[1583]: time="2026-01-24T00:28:40.362755687Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 24 00:28:40.362896 containerd[1583]: time="2026-01-24T00:28:40.362774967Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 24 00:28:40.362896 containerd[1583]: time="2026-01-24T00:28:40.362814707Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 24 00:28:40.362896 containerd[1583]: time="2026-01-24T00:28:40.362826857Z" level=info msg="Start snapshots syncer" Jan 24 00:28:40.362896 containerd[1583]: time="2026-01-24T00:28:40.362865427Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 24 00:28:40.366399 containerd[1583]: time="2026-01-24T00:28:40.363369657Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 24 00:28:40.366399 containerd[1583]: time="2026-01-24T00:28:40.365487666Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365542236Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365661196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365682626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365693436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365703766Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365714626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365742006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365752996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365762896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 24 
00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365773106Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365801676Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365813166Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 24 00:28:40.366596 containerd[1583]: time="2026-01-24T00:28:40.365821306Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 00:28:40.366841 containerd[1583]: time="2026-01-24T00:28:40.365834386Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 24 00:28:40.366841 containerd[1583]: time="2026-01-24T00:28:40.365842206Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 24 00:28:40.366841 containerd[1583]: time="2026-01-24T00:28:40.365852286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 24 00:28:40.366841 containerd[1583]: time="2026-01-24T00:28:40.365862826Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 24 00:28:40.366841 containerd[1583]: time="2026-01-24T00:28:40.365939376Z" level=info msg="runtime interface created" Jan 24 00:28:40.366841 containerd[1583]: time="2026-01-24T00:28:40.365951076Z" level=info msg="created NRI interface" Jan 24 00:28:40.366841 containerd[1583]: time="2026-01-24T00:28:40.365960066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 24 00:28:40.366841 containerd[1583]: time="2026-01-24T00:28:40.365971926Z" level=info msg="Connect containerd service" Jan 24 00:28:40.366841 containerd[1583]: time="2026-01-24T00:28:40.365990506Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:28:40.369054 containerd[1583]: time="2026-01-24T00:28:40.369014874Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:28:40.405949 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 24 00:28:40.411232 dbus-daemon[1557]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 00:28:40.412574 dbus-daemon[1557]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.9' (uid=0 pid=1650 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 24 00:28:40.419525 systemd[1]: Starting polkit.service - Authorization Manager... Jan 24 00:28:40.478515 coreos-metadata[1638]: Jan 24 00:28:40.476 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 24 00:28:40.512707 sshd_keygen[1610]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:28:40.592167 polkitd[1655]: Started polkitd version 126 Jan 24 00:28:40.593489 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 24 00:28:40.601702 containerd[1583]: time="2026-01-24T00:28:40.601647448Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:28:40.601929 containerd[1583]: time="2026-01-24T00:28:40.601912998Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:28:40.602010 containerd[1583]: time="2026-01-24T00:28:40.601992878Z" level=info msg="Start subscribing containerd event" Jan 24 00:28:40.602090 containerd[1583]: time="2026-01-24T00:28:40.602066158Z" level=info msg="Start recovering state" Jan 24 00:28:40.602311 containerd[1583]: time="2026-01-24T00:28:40.602288348Z" level=info msg="Start event monitor" Jan 24 00:28:40.602488 containerd[1583]: time="2026-01-24T00:28:40.602471817Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:28:40.602555 containerd[1583]: time="2026-01-24T00:28:40.602542327Z" level=info msg="Start streaming server" Jan 24 00:28:40.602604 containerd[1583]: time="2026-01-24T00:28:40.602594467Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 24 00:28:40.602647 containerd[1583]: time="2026-01-24T00:28:40.602636067Z" level=info msg="runtime interface starting up..." Jan 24 00:28:40.602684 containerd[1583]: time="2026-01-24T00:28:40.602675377Z" level=info msg="starting plugins..." Jan 24 00:28:40.602737 containerd[1583]: time="2026-01-24T00:28:40.602726657Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 24 00:28:40.602922 containerd[1583]: time="2026-01-24T00:28:40.602900557Z" level=info msg="containerd successfully booted in 0.296569s" Jan 24 00:28:40.604528 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:28:40.607255 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:28:40.611142 polkitd[1655]: Loading rules from directory /etc/polkit-1/rules.d Jan 24 00:28:40.612118 polkitd[1655]: Loading rules from directory /run/polkit-1/rules.d Jan 24 00:28:40.612177 polkitd[1655]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 24 00:28:40.612443 polkitd[1655]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 24 00:28:40.612476 polkitd[1655]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 24 00:28:40.612519 polkitd[1655]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 24 00:28:40.613098 polkitd[1655]: Finished loading, compiling and executing 2 rules Jan 24 00:28:40.613438 systemd[1]: Started polkit.service - Authorization Manager. Jan 24 00:28:40.616880 dbus-daemon[1557]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 24 00:28:40.617284 polkitd[1655]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 24 00:28:40.617886 coreos-metadata[1638]: Jan 24 00:28:40.617 INFO Fetch successful Jan 24 00:28:40.642624 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:28:40.642977 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:28:40.643274 systemd-hostnamed[1650]: Hostname set to <172-234-200-140> (transient) Jan 24 00:28:40.643538 systemd-resolved[1275]: System hostname changed to '172-234-200-140'. Jan 24 00:28:40.650920 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 24 00:28:40.658857 update-ssh-keys[1685]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:28:40.659355 tar[1580]: linux-amd64/README.md Jan 24 00:28:40.660585 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:28:40.667298 systemd[1]: Finished sshkeys.service. Jan 24 00:28:40.676082 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:28:40.683911 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:28:40.686840 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:28:40.688739 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:28:40.691422 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:28:40.848191 coreos-metadata[1556]: Jan 24 00:28:40.847 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 24 00:28:40.853552 systemd-networkd[1487]: eth0: Gained IPv6LL Jan 24 00:28:40.854236 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. Jan 24 00:28:40.857692 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:28:40.860292 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:28:40.864645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:28:40.868592 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:28:40.897649 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:28:40.980945 coreos-metadata[1556]: Jan 24 00:28:40.980 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 24 00:28:41.162123 coreos-metadata[1556]: Jan 24 00:28:41.161 INFO Fetch successful Jan 24 00:28:41.162483 coreos-metadata[1556]: Jan 24 00:28:41.162 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 24 00:28:41.422065 coreos-metadata[1556]: Jan 24 00:28:41.421 INFO Fetch successful Jan 24 00:28:41.544495 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:28:41.545704 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. Jan 24 00:28:41.547053 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:28:41.828970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:28:41.831954 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:28:41.833595 systemd[1]: Startup finished in 3.012s (kernel) + 5.529s (initrd) + 5.638s (userspace) = 14.180s. Jan 24 00:28:41.836706 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:28:42.373888 kubelet[1737]: E0124 00:28:42.373804 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:28:42.377930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:28:42.378132 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:28:42.378646 systemd[1]: kubelet.service: Consumed 926ms CPU time, 266.2M memory peak. 
Jan 24 00:28:43.351635 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. Jan 24 00:28:43.568612 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:28:43.570892 systemd[1]: Started sshd@0-172.234.200.140:22-68.220.241.50:53014.service - OpenSSH per-connection server daemon (68.220.241.50:53014). Jan 24 00:28:43.751756 sshd[1749]: Accepted publickey for core from 68.220.241.50 port 53014 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:28:43.754086 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:28:43.761336 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:28:43.762550 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:28:43.769141 systemd-logind[1571]: New session 1 of user core. Jan 24 00:28:43.781719 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:28:43.784940 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:28:43.803808 (systemd)[1755]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:28:43.807087 systemd-logind[1571]: New session 2 of user core. Jan 24 00:28:43.933213 systemd[1755]: Queued start job for default target default.target. Jan 24 00:28:43.945354 systemd[1755]: Created slice app.slice - User Application Slice. Jan 24 00:28:43.945624 systemd[1755]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 24 00:28:43.945648 systemd[1755]: Reached target paths.target - Paths. Jan 24 00:28:43.945708 systemd[1755]: Reached target timers.target - Timers. Jan 24 00:28:43.947863 systemd[1755]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:28:43.949462 systemd[1755]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 24 00:28:43.980349 systemd[1755]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 24 00:28:43.980771 systemd[1755]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:28:43.981059 systemd[1755]: Reached target sockets.target - Sockets. Jan 24 00:28:43.981197 systemd[1755]: Reached target basic.target - Basic System. Jan 24 00:28:43.981327 systemd[1755]: Reached target default.target - Main User Target. Jan 24 00:28:43.981480 systemd[1755]: Startup finished in 167ms. Jan 24 00:28:43.981538 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:28:44.000986 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:28:44.089831 systemd[1]: Started sshd@1-172.234.200.140:22-68.220.241.50:53022.service - OpenSSH per-connection server daemon (68.220.241.50:53022). Jan 24 00:28:44.259209 sshd[1769]: Accepted publickey for core from 68.220.241.50 port 53022 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:28:44.260079 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:28:44.267243 systemd-logind[1571]: New session 3 of user core. Jan 24 00:28:44.276567 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 24 00:28:44.332808 sshd[1773]: Connection closed by 68.220.241.50 port 53022 Jan 24 00:28:44.334541 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Jan 24 00:28:44.339121 systemd[1]: sshd@1-172.234.200.140:22-68.220.241.50:53022.service: Deactivated successfully. Jan 24 00:28:44.341146 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:28:44.342242 systemd-logind[1571]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:28:44.346910 systemd-logind[1571]: Removed session 3. Jan 24 00:28:44.382581 systemd[1]: Started sshd@2-172.234.200.140:22-68.220.241.50:53028.service - OpenSSH per-connection server daemon (68.220.241.50:53028). Jan 24 00:28:44.556144 sshd[1779]: Accepted publickey for core from 68.220.241.50 port 53028 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:28:44.557086 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:28:44.563082 systemd-logind[1571]: New session 4 of user core. Jan 24 00:28:44.567625 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:28:44.625636 sshd[1783]: Connection closed by 68.220.241.50 port 53028 Jan 24 00:28:44.627702 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Jan 24 00:28:44.632250 systemd[1]: sshd@2-172.234.200.140:22-68.220.241.50:53028.service: Deactivated successfully. Jan 24 00:28:44.634517 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:28:44.635664 systemd-logind[1571]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:28:44.638103 systemd-logind[1571]: Removed session 4. Jan 24 00:28:44.656827 systemd[1]: Started sshd@3-172.234.200.140:22-68.220.241.50:53034.service - OpenSSH per-connection server daemon (68.220.241.50:53034). Jan 24 00:28:44.806231 sshd[1789]: Accepted publickey for core from 68.220.241.50 port 53034 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:28:44.808621 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:28:44.814443 systemd-logind[1571]: New session 5 of user core. Jan 24 00:28:44.821530 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:28:44.881008 sshd[1793]: Connection closed by 68.220.241.50 port 53034 Jan 24 00:28:44.882580 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Jan 24 00:28:44.892886 systemd[1]: sshd@3-172.234.200.140:22-68.220.241.50:53034.service: Deactivated successfully. Jan 24 00:28:44.895658 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:28:44.896417 systemd-logind[1571]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:28:44.898303 systemd-logind[1571]: Removed session 5. Jan 24 00:28:44.913692 systemd[1]: Started sshd@4-172.234.200.140:22-68.220.241.50:53046.service - OpenSSH per-connection server daemon (68.220.241.50:53046). Jan 24 00:28:45.074540 sshd[1799]: Accepted publickey for core from 68.220.241.50 port 53046 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:28:45.076051 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:28:45.082280 systemd-logind[1571]: New session 6 of user core. Jan 24 00:28:45.089524 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 24 00:28:45.139258 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:28:45.140360 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:28:45.151694 sudo[1804]: pam_unix(sudo:session): session closed for user root Jan 24 00:28:45.173148 sshd[1803]: Connection closed by 68.220.241.50 port 53046 Jan 24 00:28:45.174604 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Jan 24 00:28:45.180529 systemd[1]: sshd@4-172.234.200.140:22-68.220.241.50:53046.service: Deactivated successfully. Jan 24 00:28:45.183067 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:28:45.184079 systemd-logind[1571]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:28:45.185666 systemd-logind[1571]: Removed session 6. Jan 24 00:28:45.201119 systemd[1]: Started sshd@5-172.234.200.140:22-68.220.241.50:53056.service - OpenSSH per-connection server daemon (68.220.241.50:53056). Jan 24 00:28:45.349415 sshd[1811]: Accepted publickey for core from 68.220.241.50 port 53056 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:28:45.350598 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:28:45.356483 systemd-logind[1571]: New session 7 of user core. Jan 24 00:28:45.365584 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:28:45.402991 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:28:45.403407 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:28:45.407284 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 24 00:28:45.415130 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 24 00:28:45.415800 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:28:45.425311 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 24 00:28:45.466000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 00:28:45.468536 kernel: kauditd_printk_skb: 65 callbacks suppressed Jan 24 00:28:45.468570 kernel: audit: type=1305 audit(1769214525.466:229): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 24 00:28:45.466000 audit[1841]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe24f07d00 a2=420 a3=0 items=0 ppid=1822 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:45.474684 kernel: audit: type=1300 audit(1769214525.466:229): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe24f07d00 a2=420 a3=0 items=0 ppid=1822 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:45.481343 augenrules[1841]: No rules Jan 24 00:28:45.466000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:28:45.482147 kernel: audit: type=1327 audit(1769214525.466:229): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 24 00:28:45.483224 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:28:45.483546 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 24 00:28:45.485364 sudo[1816]: pam_unix(sudo:session): session closed for user root Jan 24 00:28:45.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.493643 kernel: audit: type=1130 audit(1769214525.485:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.493672 kernel: audit: type=1131 audit(1769214525.485:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.485000 audit[1816]: USER_END pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.500096 kernel: audit: type=1106 audit(1769214525.485:232): pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.485000 audit[1816]: CRED_DISP pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 24 00:28:45.507560 sshd[1815]: Connection closed by 68.220.241.50 port 53056 Jan 24 00:28:45.512561 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jan 24 00:28:45.513409 kernel: audit: type=1104 audit(1769214525.485:233): pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.514000 audit[1811]: USER_END pid=1811 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:28:45.518411 systemd-logind[1571]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:28:45.519814 systemd[1]: sshd@5-172.234.200.140:22-68.220.241.50:53056.service: Deactivated successfully. Jan 24 00:28:45.522713 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:28:45.528037 kernel: audit: type=1106 audit(1769214525.514:234): pid=1811 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:28:45.528074 kernel: audit: type=1104 audit(1769214525.514:235): pid=1811 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:28:45.514000 audit[1811]: CRED_DISP pid=1811 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:28:45.526161 systemd-logind[1571]: Removed session 7. Jan 24 00:28:45.532404 kernel: audit: type=1131 audit(1769214525.516:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.234.200.140:22-68.220.241.50:53056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.234.200.140:22-68.220.241.50:53056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.234.200.140:22-68.220.241.50:53066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.545269 systemd[1]: Started sshd@6-172.234.200.140:22-68.220.241.50:53066.service - OpenSSH per-connection server daemon (68.220.241.50:53066). 
Jan 24 00:28:45.698000 audit[1850]: USER_ACCT pid=1850 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:28:45.698881 sshd[1850]: Accepted publickey for core from 68.220.241.50 port 53066 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:28:45.699000 audit[1850]: CRED_ACQ pid=1850 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:28:45.699000 audit[1850]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe35685900 a2=3 a3=0 items=0 ppid=1 pid=1850 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:45.699000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:28:45.700673 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:28:45.706433 systemd-logind[1571]: New session 8 of user core. Jan 24 00:28:45.712659 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:28:45.716000 audit[1850]: USER_START pid=1850 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:28:45.718000 audit[1854]: CRED_ACQ pid=1854 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:28:45.748000 audit[1855]: USER_ACCT pid=1855 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.749237 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:28:45.749000 audit[1855]: CRED_REFR pid=1855 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:28:45.749628 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:28:45.749000 audit[1855]: USER_START pid=1855 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:28:46.136546 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 24 00:28:46.146975 (dockerd)[1874]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:28:46.437540 dockerd[1874]: time="2026-01-24T00:28:46.436525100Z" level=info msg="Starting up" Jan 24 00:28:46.437838 dockerd[1874]: time="2026-01-24T00:28:46.437761349Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 24 00:28:46.452759 dockerd[1874]: time="2026-01-24T00:28:46.452703102Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 24 00:28:46.480647 systemd[1]: var-lib-docker-metacopy\x2dcheck3743474321-merged.mount: Deactivated successfully. Jan 24 00:28:46.507411 dockerd[1874]: time="2026-01-24T00:28:46.507310494Z" level=info msg="Loading containers: start." Jan 24 00:28:46.523419 kernel: Initializing XFRM netlink socket Jan 24 00:28:46.595000 audit[1921]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.595000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc4c7ccdc0 a2=0 a3=0 items=0 ppid=1874 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.595000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:28:46.598000 audit[1923]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.598000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe045b4810 a2=0 a3=0 items=0 ppid=1874 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:28:46.600000 audit[1925]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.600000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd8066940 a2=0 a3=0 items=0 ppid=1874 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.600000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:28:46.603000 audit[1927]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.603000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd42aa46f0 a2=0 a3=0 items=0 ppid=1874 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.603000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:28:46.606000 audit[1929]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.606000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff9193c080 a2=0 a3=0 items=0 ppid=1874 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.606000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:28:46.608000 audit[1931]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.608000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe1ccbd480 a2=0 a3=0 items=0 ppid=1874 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.608000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:28:46.611000 audit[1933]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.611000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc520dcbe0 a2=0 a3=0 items=0 ppid=1874 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:28:46.615000 audit[1935]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.615000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe051cb5c0 a2=0 a3=0 items=0 ppid=1874 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.615000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:28:46.649000 audit[1938]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.649000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdc5ddd230 a2=0 a3=0 items=0 ppid=1874 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.649000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 24 00:28:46.652000 audit[1940]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.652000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffec04edf90 a2=0 a3=0 items=0 ppid=1874 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.652000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:28:46.656000 audit[1942]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.656000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffff33f44d0 a2=0 a3=0 items=0 ppid=1874 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.656000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:28:46.659000 audit[1944]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.659000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffdeef2ec0 a2=0 a3=0 items=0 ppid=1874 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.659000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:28:46.662000 audit[1946]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.662000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffef32f0870 a2=0 a3=0 items=0 ppid=1874 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.662000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:28:46.709000 audit[1976]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.709000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd2f15a5a0 a2=0 a3=0 items=0 ppid=1874 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.709000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 24 00:28:46.712000 audit[1978]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.712000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffef5e63390 a2=0 a3=0 items=0 ppid=1874 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 24 00:28:46.714000 audit[1980]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.714000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8e8549b0 a2=0 a3=0 items=0 ppid=1874 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.714000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 24 00:28:46.717000 audit[1982]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.717000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefef00210 a2=0 a3=0 items=0 ppid=1874 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 24 00:28:46.719000 audit[1984]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.719000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffee59b65a0 a2=0 a3=0 items=0 ppid=1874 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 24 00:28:46.722000 audit[1986]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.722000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffec7f8d6d0 a2=0 a3=0 items=0 ppid=1874 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.722000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:28:46.724000 audit[1988]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1988 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.724000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcf8c65c80 a2=0 a3=0 items=0 ppid=1874 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.724000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:28:46.727000 audit[1990]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.727000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe986922f0 a2=0 a3=0 items=0 ppid=1874 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.727000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 24 00:28:46.731000 audit[1992]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.731000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff7a491710 a2=0 a3=0 items=0 ppid=1874 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 24 00:28:46.733000 audit[1994]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.733000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffedc6c6000 a2=0 a3=0 items=0 ppid=1874 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.733000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 24 00:28:46.736000 audit[1996]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.736000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcfd15ae20 a2=0 a3=0 items=0 ppid=1874 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 24 00:28:46.739000 audit[1998]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 24 00:28:46.739000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff66040820 a2=0 a3=0 items=0 ppid=1874 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 24 00:28:46.742000 audit[2000]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.742000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffca43a41c0 a2=0 a3=0 items=0 ppid=1874 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 24 00:28:46.748000 audit[2005]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.748000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc723d33c0 a2=0 a3=0 items=0 ppid=1874 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.748000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:28:46.751000 audit[2007]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.751000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcd573ad50 a2=0 a3=0 items=0 ppid=1874 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:28:46.757000 audit[2009]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.757000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe3660d5d0 a2=0 a3=0 items=0 ppid=1874 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.757000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:28:46.760000 audit[2011]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.760000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe858e37d0 a2=0 a3=0 items=0 ppid=1874 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.760000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 24 00:28:46.762000 audit[2013]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.762000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd31838eb0 a2=0 a3=0 items=0 ppid=1874 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.762000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 24 00:28:46.765000 audit[2015]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:28:46.765000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe98576150 a2=0 a3=0 items=0 ppid=1874 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.765000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 24 00:28:46.771098 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. Jan 24 00:28:46.772282 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. Jan 24 00:28:46.781000 audit[2019]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.781000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffe59e0cc0 a2=0 a3=0 items=0 ppid=1874 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.781000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 24 00:28:46.786543 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. 
Jan 24 00:28:46.789000 audit[2022]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.789000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcfae272e0 a2=0 a3=0 items=0 ppid=1874 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.789000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 24 00:28:46.801000 audit[2030]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.801000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffcc6ac74c0 a2=0 a3=0 items=0 ppid=1874 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 24 00:28:46.813000 audit[2036]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.813000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffda21fbc40 a2=0 a3=0 items=0 ppid=1874 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.813000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 24 00:28:46.816000 audit[2038]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.816000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd23a0a9e0 a2=0 a3=0 items=0 ppid=1874 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.816000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 24 00:28:46.818000 audit[2040]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.818000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffee367ca40 a2=0 a3=0 items=0 ppid=1874 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.818000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 24 00:28:46.821000 audit[2042]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.821000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe31dd1660 a2=0 a3=0 items=0 ppid=1874 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 24 00:28:46.824000 audit[2044]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:28:46.824000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe8463a2d0 a2=0 a3=0 items=0 ppid=1874 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:28:46.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 24 00:28:46.825232 systemd-networkd[1487]: docker0: Link UP Jan 24 00:28:46.825659 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. Jan 24 00:28:46.828595 dockerd[1874]: time="2026-01-24T00:28:46.828553064Z" level=info msg="Loading containers: done." Jan 24 00:28:46.847092 dockerd[1874]: time="2026-01-24T00:28:46.847053054Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:28:46.847336 dockerd[1874]: time="2026-01-24T00:28:46.847122554Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 24 00:28:46.847336 dockerd[1874]: time="2026-01-24T00:28:46.847203064Z" level=info msg="Initializing buildkit" Jan 24 00:28:46.871429 dockerd[1874]: time="2026-01-24T00:28:46.871368062Z" level=info msg="Completed buildkit initialization" Jan 24 00:28:46.876796 dockerd[1874]: time="2026-01-24T00:28:46.876767100Z" level=info msg="Daemon has completed initialization" Jan 24 00:28:46.876865 dockerd[1874]: time="2026-01-24T00:28:46.876819220Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:28:46.877237 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:28:46.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:47.481879 containerd[1583]: time="2026-01-24T00:28:47.481703547Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:28:48.094113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292326299.mount: Deactivated successfully. 
Jan 24 00:28:49.159317 containerd[1583]: time="2026-01-24T00:28:49.159248898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:49.160878 containerd[1583]: time="2026-01-24T00:28:49.160692797Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 24 00:28:49.162526 containerd[1583]: time="2026-01-24T00:28:49.162485446Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:49.165555 containerd[1583]: time="2026-01-24T00:28:49.165519455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:49.166682 containerd[1583]: time="2026-01-24T00:28:49.166633414Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.684884517s" Jan 24 00:28:49.166769 containerd[1583]: time="2026-01-24T00:28:49.166753544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:28:49.172710 containerd[1583]: time="2026-01-24T00:28:49.172668451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:28:50.432326 containerd[1583]: time="2026-01-24T00:28:50.432267961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:50.433446 containerd[1583]: time="2026-01-24T00:28:50.433236761Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 24 00:28:50.434423 containerd[1583]: time="2026-01-24T00:28:50.434390150Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:50.436845 containerd[1583]: time="2026-01-24T00:28:50.436807629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:50.438503 containerd[1583]: time="2026-01-24T00:28:50.438467538Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.265757297s" Jan 24 00:28:50.438503 containerd[1583]: time="2026-01-24T00:28:50.438498578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 
00:28:50.439166 containerd[1583]: time="2026-01-24T00:28:50.439126418Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:28:51.553305 containerd[1583]: time="2026-01-24T00:28:51.553164611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:51.554528 containerd[1583]: time="2026-01-24T00:28:51.554140300Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 24 00:28:51.554856 containerd[1583]: time="2026-01-24T00:28:51.554819320Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:51.557113 containerd[1583]: time="2026-01-24T00:28:51.557090109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:51.558085 containerd[1583]: time="2026-01-24T00:28:51.558045488Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.11888079s" Jan 24 00:28:51.558169 containerd[1583]: time="2026-01-24T00:28:51.558085388Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:28:51.559772 containerd[1583]: time="2026-01-24T00:28:51.559732998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:28:52.524278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:28:52.528539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:28:52.676643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101037796.mount: Deactivated successfully. Jan 24 00:28:52.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:52.734651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:28:52.735582 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 24 00:28:52.735752 kernel: audit: type=1130 audit(1769214532.734:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:28:52.748801 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:28:52.831405 kubelet[2166]: E0124 00:28:52.831228 2166 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:28:52.842730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:28:52.842955 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:28:52.845972 systemd[1]: kubelet.service: Consumed 215ms CPU time, 110.8M memory peak. Jan 24 00:28:52.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:28:52.852401 kernel: audit: type=1131 audit(1769214532.845:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:28:53.189702 containerd[1583]: time="2026-01-24T00:28:53.189297073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:53.190855 containerd[1583]: time="2026-01-24T00:28:53.190808672Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 24 00:28:53.192626 containerd[1583]: time="2026-01-24T00:28:53.191665091Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:53.193757 containerd[1583]: time="2026-01-24T00:28:53.193580040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:53.196131 containerd[1583]: time="2026-01-24T00:28:53.196086649Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.636317691s" Jan 24 00:28:53.196183 containerd[1583]: time="2026-01-24T00:28:53.196136299Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:28:53.197582 containerd[1583]: time="2026-01-24T00:28:53.197550528Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:28:53.704533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011923914.mount: Deactivated successfully. 
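
The kubelet exit above is the usual first-boot state on a node that has not been initialized yet: the unit is enabled, but /var/lib/kubelet/config.yaml is only written later (typically by kubeadm init or kubeadm join), so the service exits with status 1 and systemd keeps scheduling restarts. A trivial check that mirrors the failing code path (the path is taken from the error message; the snippet itself is illustrative):

    import os

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"   # path from the error above

    if not os.path.exists(KUBELET_CONFIG):
        # Until this file appears, kubelet.service keeps exiting 1 and being
        # restarted by its restart policy, as seen in the log.
        print(f"{KUBELET_CONFIG} is missing; kubelet cannot load its configuration")
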
Jan 24 00:28:54.372400 containerd[1583]: time="2026-01-24T00:28:54.372272261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:54.373126 containerd[1583]: time="2026-01-24T00:28:54.373073591Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 24 00:28:54.377288 containerd[1583]: time="2026-01-24T00:28:54.377256058Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:54.381394 containerd[1583]: time="2026-01-24T00:28:54.380835657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:54.381807 containerd[1583]: time="2026-01-24T00:28:54.381778786Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.184105068s" Jan 24 00:28:54.381883 containerd[1583]: time="2026-01-24T00:28:54.381868866Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:28:54.382706 containerd[1583]: time="2026-01-24T00:28:54.382670236Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:28:54.846525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327362820.mount: Deactivated successfully. 
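
The temporary mount units such as var-lib-containerd-tmpmounts-containerd\x2dmount327362820.mount get their names from systemd's path escaping: "/" becomes "-", and characters outside [A-Za-z0-9:_.], including literal dashes, are hex-escaped. A simplified sketch of that mapping (the real rules live in systemd's unit-name handling; this is an approximation):

    # Simplified version of systemd's path -> mount-unit-name escaping.
    def path_to_mount_unit(path: str) -> str:
        safe = set("abcdefghijklmnopqrstuvwxyz"
                   "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")                  # path separators become dashes
            elif ch in safe:
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))  # anything else, incl. "-", is hex-escaped
        return "".join(out) + ".mount"

    print(path_to_mount_unit("/var/lib/containerd/tmpmounts/containerd-mount327362820"))
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount327362820.mount
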
Jan 24 00:28:54.851085 containerd[1583]: time="2026-01-24T00:28:54.850815572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:28:54.851669 containerd[1583]: time="2026-01-24T00:28:54.851411381Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 24 00:28:54.852609 containerd[1583]: time="2026-01-24T00:28:54.852564271Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:28:54.854496 containerd[1583]: time="2026-01-24T00:28:54.854467890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:28:54.855073 containerd[1583]: time="2026-01-24T00:28:54.854872950Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 472.165834ms" Jan 24 00:28:54.855073 containerd[1583]: time="2026-01-24T00:28:54.854909320Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:28:54.855466 containerd[1583]: time="2026-01-24T00:28:54.855431159Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:28:55.349351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629235782.mount: Deactivated successfully. 
Jan 24 00:28:56.990587 containerd[1583]: time="2026-01-24T00:28:56.990513251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:56.991815 containerd[1583]: time="2026-01-24T00:28:56.991501441Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 24 00:28:56.992364 containerd[1583]: time="2026-01-24T00:28:56.992337871Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:56.994652 containerd[1583]: time="2026-01-24T00:28:56.994622059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:28:56.995672 containerd[1583]: time="2026-01-24T00:28:56.995649409Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.14018641s" Jan 24 00:28:56.995739 containerd[1583]: time="2026-01-24T00:28:56.995724749Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:28:59.443721 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:28:59.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:59.443903 systemd[1]: kubelet.service: Consumed 215ms CPU time, 110.8M memory peak. Jan 24 00:28:59.451724 kernel: audit: type=1130 audit(1769214539.443:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:59.447413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:28:59.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:59.458409 kernel: audit: type=1131 audit(1769214539.443:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:28:59.484507 systemd[1]: Reload requested from client PID 2311 ('systemctl') (unit session-8.scope)... Jan 24 00:28:59.484734 systemd[1]: Reloading... Jan 24 00:28:59.666632 zram_generator::config[2364]: No configuration found. Jan 24 00:28:59.876916 systemd[1]: Reloading finished in 391 ms. 
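
Putting the containerd pull lines together gives a rough sense of registry throughput during this boot. The byte counts and durations below are copied from the log (pause:3.10 is omitted because it reported bytes read=0); the result is a back-of-the-envelope figure, not a measured bandwidth:

    # (bytes read, reported pull duration in seconds), per the log above
    pulls = {
        "kube-apiserver:v1.32.11":          (27_401_903, 1.684884517),
        "kube-controller-manager:v1.32.11": (24_985_199, 1.265757297),
        "kube-scheduler:v1.32.11":          (19_396_939, 1.11888079),
        "kube-proxy:v1.32.11":              (19_572_392, 1.636317691),
        "coredns:v1.11.3":                  (17_569_900, 1.184105068),
        "etcd:3.5.16-0":                    (45_502_580, 2.14018641),
    }
    for image, (nbytes, seconds) in pulls.items():
        print(f"{image:35s} {nbytes / seconds / 1e6:5.1f} MB/s")
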
Jan 24 00:28:59.905000 audit: BPF prog-id=67 op=LOAD Jan 24 00:28:59.910443 kernel: audit: type=1334 audit(1769214539.905:291): prog-id=67 op=LOAD Jan 24 00:28:59.905000 audit: BPF prog-id=52 op=UNLOAD Jan 24 00:28:59.905000 audit: BPF prog-id=68 op=LOAD Jan 24 00:28:59.914810 kernel: audit: type=1334 audit(1769214539.905:292): prog-id=52 op=UNLOAD Jan 24 00:28:59.914859 kernel: audit: type=1334 audit(1769214539.905:293): prog-id=68 op=LOAD Jan 24 00:28:59.918398 kernel: audit: type=1334 audit(1769214539.905:294): prog-id=66 op=UNLOAD Jan 24 00:28:59.918453 kernel: audit: type=1334 audit(1769214539.909:295): prog-id=69 op=LOAD Jan 24 00:28:59.905000 audit: BPF prog-id=66 op=UNLOAD Jan 24 00:28:59.909000 audit: BPF prog-id=69 op=LOAD Jan 24 00:28:59.909000 audit: BPF prog-id=70 op=LOAD Jan 24 00:28:59.922592 kernel: audit: type=1334 audit(1769214539.909:296): prog-id=70 op=LOAD Jan 24 00:28:59.922644 kernel: audit: type=1334 audit(1769214539.909:297): prog-id=56 op=UNLOAD Jan 24 00:28:59.909000 audit: BPF prog-id=56 op=UNLOAD Jan 24 00:28:59.928389 kernel: audit: type=1334 audit(1769214539.909:298): prog-id=57 op=UNLOAD Jan 24 00:28:59.909000 audit: BPF prog-id=57 op=UNLOAD Jan 24 00:28:59.910000 audit: BPF prog-id=71 op=LOAD Jan 24 00:28:59.910000 audit: BPF prog-id=43 op=UNLOAD Jan 24 00:28:59.910000 audit: BPF prog-id=72 op=LOAD Jan 24 00:28:59.910000 audit: BPF prog-id=73 op=LOAD Jan 24 00:28:59.910000 audit: BPF prog-id=44 op=UNLOAD Jan 24 00:28:59.910000 audit: BPF prog-id=45 op=UNLOAD Jan 24 00:28:59.912000 audit: BPF prog-id=74 op=LOAD Jan 24 00:28:59.916000 audit: BPF prog-id=46 op=UNLOAD Jan 24 00:28:59.916000 audit: BPF prog-id=75 op=LOAD Jan 24 00:28:59.916000 audit: BPF prog-id=76 op=LOAD Jan 24 00:28:59.916000 audit: BPF prog-id=47 op=UNLOAD Jan 24 00:28:59.916000 audit: BPF prog-id=48 op=UNLOAD Jan 24 00:28:59.917000 audit: BPF prog-id=77 op=LOAD Jan 24 00:28:59.917000 audit: BPF prog-id=59 op=UNLOAD Jan 24 00:28:59.919000 audit: BPF prog-id=78 op=LOAD Jan 24 00:28:59.919000 audit: BPF prog-id=63 op=UNLOAD Jan 24 00:28:59.919000 audit: BPF prog-id=79 op=LOAD Jan 24 00:28:59.919000 audit: BPF prog-id=80 op=LOAD Jan 24 00:28:59.919000 audit: BPF prog-id=64 op=UNLOAD Jan 24 00:28:59.919000 audit: BPF prog-id=65 op=UNLOAD Jan 24 00:28:59.920000 audit: BPF prog-id=81 op=LOAD Jan 24 00:28:59.921000 audit: BPF prog-id=58 op=UNLOAD Jan 24 00:28:59.921000 audit: BPF prog-id=82 op=LOAD Jan 24 00:28:59.922000 audit: BPF prog-id=49 op=UNLOAD Jan 24 00:28:59.922000 audit: BPF prog-id=83 op=LOAD Jan 24 00:28:59.922000 audit: BPF prog-id=84 op=LOAD Jan 24 00:28:59.922000 audit: BPF prog-id=50 op=UNLOAD Jan 24 00:28:59.922000 audit: BPF prog-id=51 op=UNLOAD Jan 24 00:28:59.926000 audit: BPF prog-id=85 op=LOAD Jan 24 00:28:59.926000 audit: BPF prog-id=53 op=UNLOAD Jan 24 00:28:59.926000 audit: BPF prog-id=86 op=LOAD Jan 24 00:28:59.926000 audit: BPF prog-id=87 op=LOAD Jan 24 00:28:59.926000 audit: BPF prog-id=54 op=UNLOAD Jan 24 00:28:59.926000 audit: BPF prog-id=55 op=UNLOAD Jan 24 00:28:59.930000 audit: BPF prog-id=88 op=LOAD Jan 24 00:28:59.930000 audit: BPF prog-id=60 op=UNLOAD Jan 24 00:28:59.930000 audit: BPF prog-id=89 op=LOAD Jan 24 00:28:59.930000 audit: BPF prog-id=90 op=LOAD Jan 24 00:28:59.930000 audit: BPF prog-id=61 op=UNLOAD Jan 24 00:28:59.930000 audit: BPF prog-id=62 op=UNLOAD Jan 24 00:28:59.951105 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:28:59.951225 systemd[1]: kubelet.service: Failed with result 'signal'. 
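
The burst of "audit: BPF prog-id=... op=LOAD/op=UNLOAD" records coincides with the systemd reload above and most likely reflects systemd detaching and re-attaching per-unit BPF programs; that reading is an inference, not something the log states. A quick tally helper for an excerpt saved to a file (the filename is hypothetical):

    import re
    from collections import Counter

    def count_bpf_ops(journal_text: str) -> Counter:
        """Count audit BPF LOAD/UNLOAD operations in a journal excerpt."""
        return Counter(re.findall(r"audit: BPF prog-id=\d+ op=(LOAD|UNLOAD)", journal_text))

    with open("boot-excerpt.log") as fh:   # hypothetical file holding this log
        print(count_bpf_ops(fh.read()))    # prints a Counter of LOAD vs UNLOAD operations
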
Jan 24 00:28:59.951609 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:28:59.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 24 00:28:59.951702 systemd[1]: kubelet.service: Consumed 146ms CPU time, 98.5M memory peak. Jan 24 00:28:59.953597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:00.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:29:00.141013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:29:00.148891 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:29:00.191450 kubelet[2411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:29:00.191450 kubelet[2411]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:29:00.191450 kubelet[2411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:29:00.191905 kubelet[2411]: I0124 00:29:00.191501 2411 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:29:00.519121 kubelet[2411]: I0124 00:29:00.519021 2411 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:29:00.519121 kubelet[2411]: I0124 00:29:00.519050 2411 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:29:00.519479 kubelet[2411]: I0124 00:29:00.519322 2411 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:29:00.547534 kubelet[2411]: E0124 00:29:00.547502 2411 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.200.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.200.140:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:29:00.550238 kubelet[2411]: I0124 00:29:00.550122 2411 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:29:00.560673 kubelet[2411]: I0124 00:29:00.560644 2411 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:29:00.564244 kubelet[2411]: I0124 00:29:00.564225 2411 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:29:00.564518 kubelet[2411]: I0124 00:29:00.564491 2411 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:29:00.564669 kubelet[2411]: I0124 00:29:00.564518 2411 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-200-140","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:29:00.564776 kubelet[2411]: I0124 00:29:00.564675 2411 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:29:00.564776 kubelet[2411]: I0124 00:29:00.564685 2411 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:29:00.564817 kubelet[2411]: I0124 00:29:00.564810 2411 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:29:00.568145 kubelet[2411]: I0124 00:29:00.568042 2411 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:29:00.568145 kubelet[2411]: I0124 00:29:00.568070 2411 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:29:00.568145 kubelet[2411]: I0124 00:29:00.568090 2411 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:29:00.568145 kubelet[2411]: I0124 00:29:00.568100 2411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:29:00.575169 kubelet[2411]: I0124 00:29:00.574208 2411 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:29:00.575169 kubelet[2411]: I0124 00:29:00.574561 2411 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:29:00.575169 kubelet[2411]: W0124 00:29:00.574610 2411 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
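
The HardEvictionThresholds buried in the nodeConfig dump above match kubelet's documented default hard-eviction signals; restated in a readable form (values copied from the log, percentages written as fractions):

    # Hard-eviction thresholds as reported in the Container Manager nodeConfig
    HARD_EVICTION_THRESHOLDS = {
        "memory.available":   "100Mi",  # quantity-based
        "nodefs.available":   0.10,     # fraction of filesystem capacity
        "nodefs.inodesFree":  0.05,
        "imagefs.available":  0.15,
        "imagefs.inodesFree": 0.05,
    }
    for signal, threshold in HARD_EVICTION_THRESHOLDS.items():
        print(f"evict when {signal} < {threshold}")
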
Jan 24 00:29:00.576281 kubelet[2411]: I0124 00:29:00.576266 2411 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:29:00.576364 kubelet[2411]: I0124 00:29:00.576353 2411 server.go:1287] "Started kubelet" Jan 24 00:29:00.576561 kubelet[2411]: W0124 00:29:00.576529 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.200.140:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-200-140&limit=500&resourceVersion=0": dial tcp 172.234.200.140:6443: connect: connection refused Jan 24 00:29:00.576662 kubelet[2411]: E0124 00:29:00.576625 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.200.140:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-200-140&limit=500&resourceVersion=0\": dial tcp 172.234.200.140:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:29:00.577752 kubelet[2411]: I0124 00:29:00.577729 2411 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:29:00.578627 kubelet[2411]: I0124 00:29:00.578612 2411 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:29:00.579315 kubelet[2411]: W0124 00:29:00.579281 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.200.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.200.140:6443: connect: connection refused Jan 24 00:29:00.579364 kubelet[2411]: E0124 00:29:00.579318 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.200.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.200.140:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:29:00.579498 kubelet[2411]: I0124 00:29:00.579457 2411 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:29:00.579718 kubelet[2411]: I0124 00:29:00.579692 2411 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:29:00.581310 kubelet[2411]: I0124 00:29:00.581284 2411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:29:00.583229 kubelet[2411]: I0124 00:29:00.583202 2411 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:29:00.585197 kubelet[2411]: E0124 00:29:00.584202 2411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.200.140:6443/api/v1/namespaces/default/events\": dial tcp 172.234.200.140:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-200-140.188d83403e822f4a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-200-140,UID:172-234-200-140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-200-140,},FirstTimestamp:2026-01-24 00:29:00.576337738 +0000 UTC m=+0.422841289,LastTimestamp:2026-01-24 00:29:00.576337738 +0000 UTC m=+0.422841289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-200-140,}" Jan 24 00:29:00.585547 kubelet[2411]: I0124 00:29:00.585525 2411 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:29:00.587029 kubelet[2411]: E0124 00:29:00.585719 2411 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-200-140\" not found" Jan 24 00:29:00.587029 kubelet[2411]: E0124 00:29:00.586553 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.200.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-200-140?timeout=10s\": dial tcp 172.234.200.140:6443: connect: connection refused" interval="200ms" Jan 24 00:29:00.587029 kubelet[2411]: I0124 00:29:00.586584 2411 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:29:00.587029 kubelet[2411]: W0124 00:29:00.586929 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.200.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.200.140:6443: connect: connection refused Jan 24 00:29:00.587029 kubelet[2411]: E0124 00:29:00.586959 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.200.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.200.140:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:29:00.587029 kubelet[2411]: I0124 00:29:00.587002 2411 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:29:00.587782 kubelet[2411]: I0124 00:29:00.587758 2411 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:29:00.587842 kubelet[2411]: I0124 00:29:00.587826 2411 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:29:00.588000 audit[2423]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:00.588000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdec443f60 a2=0 a3=0 items=0 ppid=2411 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:29:00.589404 kubelet[2411]: I0124 00:29:00.589258 2411 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:29:00.590000 audit[2424]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:00.590000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd348c4e40 a2=0 a3=0 items=0 ppid=2411 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.590000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:29:00.592000 audit[2426]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:00.592000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffea8489d80 a2=0 a3=0 items=0 ppid=2411 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:29:00.601000 audit[2428]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:00.601000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffca5dbaf80 a2=0 a3=0 items=0 ppid=2411 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.601000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:29:00.613000 audit[2433]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:00.613000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff32f415c0 a2=0 a3=0 items=0 ppid=2411 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 24 00:29:00.615397 kubelet[2411]: E0124 00:29:00.613973 2411 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:29:00.615397 kubelet[2411]: I0124 00:29:00.614828 2411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:29:00.615000 audit[2435]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:00.615000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeae1048c0 a2=0 a3=0 items=0 ppid=2411 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.615000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 24 00:29:00.616131 kubelet[2411]: I0124 00:29:00.616112 2411 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:29:00.616166 kubelet[2411]: I0124 00:29:00.616133 2411 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:29:00.616166 kubelet[2411]: I0124 00:29:00.616152 2411 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:29:00.616166 kubelet[2411]: I0124 00:29:00.616159 2411 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:29:00.616231 kubelet[2411]: E0124 00:29:00.616203 2411 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:29:00.616000 audit[2436]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:00.616000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3cbc85c0 a2=0 a3=0 items=0 ppid=2411 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.616000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:29:00.618000 audit[2437]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:00.618000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff673ee660 a2=0 a3=0 items=0 ppid=2411 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:29:00.619000 audit[2438]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:00.619000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf8e9ad10 a2=0 a3=0 items=0 ppid=2411 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:29:00.620703 kubelet[2411]: I0124 00:29:00.620690 2411 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:29:00.620786 kubelet[2411]: I0124 00:29:00.620773 2411 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:29:00.620848 kubelet[2411]: I0124 00:29:00.620838 2411 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:29:00.620000 audit[2439]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:00.620000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca4edbc00 a2=0 a3=0 items=0 ppid=2411 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:29:00.620000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 24 00:29:00.622000 audit[2440]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:00.622000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff07766470 a2=0 a3=0 items=0 ppid=2411 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.622000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 24 00:29:00.623128 kubelet[2411]: I0124 00:29:00.622924 2411 policy_none.go:49] "None policy: Start" Jan 24 00:29:00.623128 kubelet[2411]: I0124 00:29:00.622942 2411 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:29:00.623128 kubelet[2411]: I0124 00:29:00.622953 2411 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:29:00.623000 audit[2441]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:00.623000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7e193590 a2=0 a3=0 items=0 ppid=2411 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:00.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 24 00:29:00.624400 kubelet[2411]: W0124 00:29:00.624343 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.200.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.200.140:6443: connect: connection refused Jan 24 00:29:00.624400 kubelet[2411]: E0124 00:29:00.624392 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.200.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.200.140:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:29:00.629911 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:29:00.641267 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:29:00.645675 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
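
Every reflector, lease, and event error in this stretch fails the same way: dial tcp 172.234.200.140:6443: connect: connection refused. The kubelet is simply up before the kube-apiserver static pod it is about to launch is listening, so these errors are expected to clear once the control-plane containers start. A minimal connectivity probe along the same lines (address and port taken from the log; the probe itself is illustrative):

    import socket

    def apiserver_reachable(host: str = "172.234.200.140", port: int = 6443) -> bool:
        """Return True once something is accepting TCP connections on host:port."""
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            return False

    print(apiserver_reachable())   # False while the log still shows "connection refused"
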
Jan 24 00:29:00.653674 kubelet[2411]: I0124 00:29:00.653501 2411 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:29:00.654152 kubelet[2411]: I0124 00:29:00.654066 2411 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:29:00.654852 kubelet[2411]: I0124 00:29:00.654087 2411 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:29:00.654852 kubelet[2411]: I0124 00:29:00.654773 2411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:29:00.656000 kubelet[2411]: E0124 00:29:00.655986 2411 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:29:00.656095 kubelet[2411]: E0124 00:29:00.656084 2411 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-200-140\" not found" Jan 24 00:29:00.726815 systemd[1]: Created slice kubepods-burstable-podbee5a78f536f63bf489787fd6be95d47.slice - libcontainer container kubepods-burstable-podbee5a78f536f63bf489787fd6be95d47.slice. Jan 24 00:29:00.737229 kubelet[2411]: E0124 00:29:00.737193 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-140\" not found" node="172-234-200-140" Jan 24 00:29:00.739685 systemd[1]: Created slice kubepods-burstable-pod4f6998f0fbe2ee19682d62c3879de96a.slice - libcontainer container kubepods-burstable-pod4f6998f0fbe2ee19682d62c3879de96a.slice. Jan 24 00:29:00.751597 kubelet[2411]: E0124 00:29:00.751580 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-140\" not found" node="172-234-200-140" Jan 24 00:29:00.754111 systemd[1]: Created slice kubepods-burstable-pod814956290f6db45816faf08cf94d87b3.slice - libcontainer container kubepods-burstable-pod814956290f6db45816faf08cf94d87b3.slice. 
Jan 24 00:29:00.756832 kubelet[2411]: I0124 00:29:00.756792 2411 kubelet_node_status.go:75] "Attempting to register node" node="172-234-200-140" Jan 24 00:29:00.757394 kubelet[2411]: E0124 00:29:00.757353 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.200.140:6443/api/v1/nodes\": dial tcp 172.234.200.140:6443: connect: connection refused" node="172-234-200-140" Jan 24 00:29:00.757606 kubelet[2411]: E0124 00:29:00.757590 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-140\" not found" node="172-234-200-140" Jan 24 00:29:00.787312 kubelet[2411]: E0124 00:29:00.787288 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.200.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-200-140?timeout=10s\": dial tcp 172.234.200.140:6443: connect: connection refused" interval="400ms" Jan 24 00:29:00.788383 kubelet[2411]: I0124 00:29:00.788358 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bee5a78f536f63bf489787fd6be95d47-k8s-certs\") pod \"kube-apiserver-172-234-200-140\" (UID: \"bee5a78f536f63bf489787fd6be95d47\") " pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:00.788432 kubelet[2411]: I0124 00:29:00.788414 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bee5a78f536f63bf489787fd6be95d47-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-200-140\" (UID: \"bee5a78f536f63bf489787fd6be95d47\") " pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:00.788469 kubelet[2411]: I0124 00:29:00.788442 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f6998f0fbe2ee19682d62c3879de96a-kubeconfig\") pod \"kube-controller-manager-172-234-200-140\" (UID: \"4f6998f0fbe2ee19682d62c3879de96a\") " pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:00.788495 kubelet[2411]: I0124 00:29:00.788485 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/814956290f6db45816faf08cf94d87b3-kubeconfig\") pod \"kube-scheduler-172-234-200-140\" (UID: \"814956290f6db45816faf08cf94d87b3\") " pod="kube-system/kube-scheduler-172-234-200-140" Jan 24 00:29:00.788533 kubelet[2411]: I0124 00:29:00.788512 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f6998f0fbe2ee19682d62c3879de96a-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-200-140\" (UID: \"4f6998f0fbe2ee19682d62c3879de96a\") " pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:00.788566 kubelet[2411]: I0124 00:29:00.788543 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bee5a78f536f63bf489787fd6be95d47-ca-certs\") pod \"kube-apiserver-172-234-200-140\" (UID: \"bee5a78f536f63bf489787fd6be95d47\") " pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:00.788566 kubelet[2411]: I0124 00:29:00.788560 2411 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f6998f0fbe2ee19682d62c3879de96a-ca-certs\") pod \"kube-controller-manager-172-234-200-140\" (UID: \"4f6998f0fbe2ee19682d62c3879de96a\") " pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:00.788664 kubelet[2411]: I0124 00:29:00.788576 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f6998f0fbe2ee19682d62c3879de96a-flexvolume-dir\") pod \"kube-controller-manager-172-234-200-140\" (UID: \"4f6998f0fbe2ee19682d62c3879de96a\") " pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:00.788664 kubelet[2411]: I0124 00:29:00.788592 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f6998f0fbe2ee19682d62c3879de96a-k8s-certs\") pod \"kube-controller-manager-172-234-200-140\" (UID: \"4f6998f0fbe2ee19682d62c3879de96a\") " pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:00.960307 kubelet[2411]: I0124 00:29:00.960262 2411 kubelet_node_status.go:75] "Attempting to register node" node="172-234-200-140" Jan 24 00:29:00.960730 kubelet[2411]: E0124 00:29:00.960702 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.200.140:6443/api/v1/nodes\": dial tcp 172.234.200.140:6443: connect: connection refused" node="172-234-200-140" Jan 24 00:29:01.038974 kubelet[2411]: E0124 00:29:01.038527 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:01.039501 containerd[1583]: time="2026-01-24T00:29:01.039470847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-200-140,Uid:bee5a78f536f63bf489787fd6be95d47,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:01.052861 kubelet[2411]: E0124 00:29:01.052046 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:01.053278 containerd[1583]: time="2026-01-24T00:29:01.053233650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-200-140,Uid:4f6998f0fbe2ee19682d62c3879de96a,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:01.059033 kubelet[2411]: E0124 00:29:01.058816 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:01.059832 containerd[1583]: time="2026-01-24T00:29:01.059807826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-200-140,Uid:814956290f6db45816faf08cf94d87b3,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:01.063255 containerd[1583]: time="2026-01-24T00:29:01.063155745Z" level=info msg="connecting to shim 6aaf5050e85692fdd9444d306c44e52c60d1bfba99f962e56f559f98e71a8664" address="unix:///run/containerd/s/03ecc92b419507635ccd137383b6304d51f0abf41e53044c3d584bec3b8279e5" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:01.092660 systemd[1]: Started cri-containerd-6aaf5050e85692fdd9444d306c44e52c60d1bfba99f962e56f559f98e71a8664.scope - libcontainer container 
6aaf5050e85692fdd9444d306c44e52c60d1bfba99f962e56f559f98e71a8664. Jan 24 00:29:01.107754 containerd[1583]: time="2026-01-24T00:29:01.107715962Z" level=info msg="connecting to shim ba16c8e492af8ade00b5ab6df3e89035f484e800c3c571f636c2fc279635b174" address="unix:///run/containerd/s/3e5f3e3979f0ed327fec353e73a7bc19bea7588d3c7f61192a15cacfd13e269a" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:01.111715 containerd[1583]: time="2026-01-24T00:29:01.111679960Z" level=info msg="connecting to shim 3ab8e2941ad4bd50044d1eac7fa1eac49c665ab9584f8ad446ae856f919c6611" address="unix:///run/containerd/s/51a07a4f9c800ad7ee5341455fd93b684be6ced210fcde58b7055abb98da36db" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:01.130000 audit: BPF prog-id=91 op=LOAD Jan 24 00:29:01.131000 audit: BPF prog-id=92 op=LOAD Jan 24 00:29:01.131000 audit[2462]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616635303530653835363932666464393434346433303663343465 Jan 24 00:29:01.131000 audit: BPF prog-id=92 op=UNLOAD Jan 24 00:29:01.131000 audit[2462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616635303530653835363932666464393434346433303663343465 Jan 24 00:29:01.131000 audit: BPF prog-id=93 op=LOAD Jan 24 00:29:01.131000 audit[2462]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616635303530653835363932666464393434346433303663343465 Jan 24 00:29:01.131000 audit: BPF prog-id=94 op=LOAD Jan 24 00:29:01.131000 audit[2462]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616635303530653835363932666464393434346433303663343465 Jan 24 00:29:01.131000 audit: BPF prog-id=94 op=UNLOAD Jan 24 
00:29:01.131000 audit[2462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616635303530653835363932666464393434346433303663343465 Jan 24 00:29:01.131000 audit: BPF prog-id=93 op=UNLOAD Jan 24 00:29:01.131000 audit[2462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616635303530653835363932666464393434346433303663343465 Jan 24 00:29:01.131000 audit: BPF prog-id=95 op=LOAD Jan 24 00:29:01.131000 audit[2462]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2452 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661616635303530653835363932666464393434346433303663343465 Jan 24 00:29:01.149669 systemd[1]: Started cri-containerd-3ab8e2941ad4bd50044d1eac7fa1eac49c665ab9584f8ad446ae856f919c6611.scope - libcontainer container 3ab8e2941ad4bd50044d1eac7fa1eac49c665ab9584f8ad446ae856f919c6611. Jan 24 00:29:01.164738 systemd[1]: Started cri-containerd-ba16c8e492af8ade00b5ab6df3e89035f484e800c3c571f636c2fc279635b174.scope - libcontainer container ba16c8e492af8ade00b5ab6df3e89035f484e800c3c571f636c2fc279635b174. 
Jan 24 00:29:01.175000 audit: BPF prog-id=96 op=LOAD Jan 24 00:29:01.177000 audit: BPF prog-id=97 op=LOAD Jan 24 00:29:01.177000 audit[2515]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2497 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623865323934316164346264353030343464316561633766613165 Jan 24 00:29:01.178000 audit: BPF prog-id=97 op=UNLOAD Jan 24 00:29:01.178000 audit[2515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2497 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623865323934316164346264353030343464316561633766613165 Jan 24 00:29:01.178000 audit: BPF prog-id=98 op=LOAD Jan 24 00:29:01.178000 audit[2515]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2497 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623865323934316164346264353030343464316561633766613165 Jan 24 00:29:01.178000 audit: BPF prog-id=99 op=LOAD Jan 24 00:29:01.178000 audit[2515]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2497 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623865323934316164346264353030343464316561633766613165 Jan 24 00:29:01.179000 audit: BPF prog-id=99 op=UNLOAD Jan 24 00:29:01.179000 audit[2515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2497 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623865323934316164346264353030343464316561633766613165 Jan 24 00:29:01.179000 audit: BPF prog-id=98 op=UNLOAD Jan 24 00:29:01.179000 audit[2515]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2497 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623865323934316164346264353030343464316561633766613165 Jan 24 00:29:01.179000 audit: BPF prog-id=100 op=LOAD Jan 24 00:29:01.179000 audit[2515]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2497 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361623865323934316164346264353030343464316561633766613165 Jan 24 00:29:01.188266 kubelet[2411]: E0124 00:29:01.188079 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.200.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-200-140?timeout=10s\": dial tcp 172.234.200.140:6443: connect: connection refused" interval="800ms" Jan 24 00:29:01.192000 audit: BPF prog-id=101 op=LOAD Jan 24 00:29:01.192000 audit: BPF prog-id=102 op=LOAD Jan 24 00:29:01.192000 audit[2529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2481 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.192000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261313663386534393261663861646530306235616236646633653839 Jan 24 00:29:01.193000 audit: BPF prog-id=102 op=UNLOAD Jan 24 00:29:01.193000 audit[2529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261313663386534393261663861646530306235616236646633653839 Jan 24 00:29:01.193000 audit: BPF prog-id=103 op=LOAD Jan 24 00:29:01.193000 audit[2529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2481 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.193000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261313663386534393261663861646530306235616236646633653839 Jan 24 00:29:01.194000 audit: BPF prog-id=104 op=LOAD Jan 24 00:29:01.194000 audit[2529]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2481 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261313663386534393261663861646530306235616236646633653839 Jan 24 00:29:01.194000 audit: BPF prog-id=104 op=UNLOAD Jan 24 00:29:01.194000 audit[2529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261313663386534393261663861646530306235616236646633653839 Jan 24 00:29:01.194000 audit: BPF prog-id=103 op=UNLOAD Jan 24 00:29:01.194000 audit[2529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261313663386534393261663861646530306235616236646633653839 Jan 24 00:29:01.194000 audit: BPF prog-id=105 op=LOAD Jan 24 00:29:01.194000 audit[2529]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2481 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261313663386534393261663861646530306235616236646633653839 Jan 24 00:29:01.197500 containerd[1583]: time="2026-01-24T00:29:01.197358958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-200-140,Uid:bee5a78f536f63bf489787fd6be95d47,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aaf5050e85692fdd9444d306c44e52c60d1bfba99f962e56f559f98e71a8664\"" Jan 24 00:29:01.199052 kubelet[2411]: E0124 00:29:01.199026 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 
172.232.0.21" Jan 24 00:29:01.200950 containerd[1583]: time="2026-01-24T00:29:01.200925146Z" level=info msg="CreateContainer within sandbox \"6aaf5050e85692fdd9444d306c44e52c60d1bfba99f962e56f559f98e71a8664\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:29:01.209989 containerd[1583]: time="2026-01-24T00:29:01.209959741Z" level=info msg="Container 670a25b3abc8b26c418c6f0a7598a2ee511a47e74c7c18da1f7833848ed11fd2: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:01.231023 containerd[1583]: time="2026-01-24T00:29:01.230914661Z" level=info msg="CreateContainer within sandbox \"6aaf5050e85692fdd9444d306c44e52c60d1bfba99f962e56f559f98e71a8664\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"670a25b3abc8b26c418c6f0a7598a2ee511a47e74c7c18da1f7833848ed11fd2\"" Jan 24 00:29:01.231344 containerd[1583]: time="2026-01-24T00:29:01.231324861Z" level=info msg="StartContainer for \"670a25b3abc8b26c418c6f0a7598a2ee511a47e74c7c18da1f7833848ed11fd2\"" Jan 24 00:29:01.232291 containerd[1583]: time="2026-01-24T00:29:01.232269870Z" level=info msg="connecting to shim 670a25b3abc8b26c418c6f0a7598a2ee511a47e74c7c18da1f7833848ed11fd2" address="unix:///run/containerd/s/03ecc92b419507635ccd137383b6304d51f0abf41e53044c3d584bec3b8279e5" protocol=ttrpc version=3 Jan 24 00:29:01.249538 containerd[1583]: time="2026-01-24T00:29:01.249455612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-200-140,Uid:4f6998f0fbe2ee19682d62c3879de96a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ab8e2941ad4bd50044d1eac7fa1eac49c665ab9584f8ad446ae856f919c6611\"" Jan 24 00:29:01.252404 kubelet[2411]: E0124 00:29:01.252357 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:01.262067 containerd[1583]: time="2026-01-24T00:29:01.261853885Z" level=info msg="CreateContainer within sandbox \"3ab8e2941ad4bd50044d1eac7fa1eac49c665ab9584f8ad446ae856f919c6611\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:29:01.273782 systemd[1]: Started cri-containerd-670a25b3abc8b26c418c6f0a7598a2ee511a47e74c7c18da1f7833848ed11fd2.scope - libcontainer container 670a25b3abc8b26c418c6f0a7598a2ee511a47e74c7c18da1f7833848ed11fd2. 
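The "Failed to ensure lease exists, will retry ... connection refused" entry above means nothing is answering on 172.234.200.140:6443 yet while the kube-apiserver container is still being created in the sandboxes logged here. A minimal Go sketch that reproduces the same reachability check, purely illustrative and not the kubelet's own retry logic:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Endpoint the kubelet is retrying in the entries above.
        conn, err := net.DialTimeout("tcp", "172.234.200.140:6443", 3*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable yet:", err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }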
Jan 24 00:29:01.277456 containerd[1583]: time="2026-01-24T00:29:01.277425068Z" level=info msg="Container b023d91cd2c0db39e0a638f4e9ab62feee821b9c29fd0512f358dd0159186fe0: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:01.282592 containerd[1583]: time="2026-01-24T00:29:01.282570685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-200-140,Uid:814956290f6db45816faf08cf94d87b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba16c8e492af8ade00b5ab6df3e89035f484e800c3c571f636c2fc279635b174\"" Jan 24 00:29:01.283160 kubelet[2411]: E0124 00:29:01.283143 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:01.285215 containerd[1583]: time="2026-01-24T00:29:01.284832724Z" level=info msg="CreateContainer within sandbox \"3ab8e2941ad4bd50044d1eac7fa1eac49c665ab9584f8ad446ae856f919c6611\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b023d91cd2c0db39e0a638f4e9ab62feee821b9c29fd0512f358dd0159186fe0\"" Jan 24 00:29:01.288162 containerd[1583]: time="2026-01-24T00:29:01.285520143Z" level=info msg="StartContainer for \"b023d91cd2c0db39e0a638f4e9ab62feee821b9c29fd0512f358dd0159186fe0\"" Jan 24 00:29:01.288162 containerd[1583]: time="2026-01-24T00:29:01.285920083Z" level=info msg="CreateContainer within sandbox \"ba16c8e492af8ade00b5ab6df3e89035f484e800c3c571f636c2fc279635b174\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:29:01.288162 containerd[1583]: time="2026-01-24T00:29:01.288117722Z" level=info msg="connecting to shim b023d91cd2c0db39e0a638f4e9ab62feee821b9c29fd0512f358dd0159186fe0" address="unix:///run/containerd/s/51a07a4f9c800ad7ee5341455fd93b684be6ced210fcde58b7055abb98da36db" protocol=ttrpc version=3 Jan 24 00:29:01.296000 audit: BPF prog-id=106 op=LOAD Jan 24 00:29:01.296000 audit: BPF prog-id=107 op=LOAD Jan 24 00:29:01.296000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2452 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637306132356233616263386232366334313863366630613735393861 Jan 24 00:29:01.296000 audit: BPF prog-id=107 op=UNLOAD Jan 24 00:29:01.296000 audit[2571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637306132356233616263386232366334313863366630613735393861 Jan 24 00:29:01.298000 audit: BPF prog-id=108 op=LOAD Jan 24 00:29:01.298000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2452 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637306132356233616263386232366334313863366630613735393861 Jan 24 00:29:01.298000 audit: BPF prog-id=109 op=LOAD Jan 24 00:29:01.298000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2452 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637306132356233616263386232366334313863366630613735393861 Jan 24 00:29:01.298000 audit: BPF prog-id=109 op=UNLOAD Jan 24 00:29:01.298000 audit[2571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637306132356233616263386232366334313863366630613735393861 Jan 24 00:29:01.298000 audit: BPF prog-id=108 op=UNLOAD Jan 24 00:29:01.298000 audit[2571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2452 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637306132356233616263386232366334313863366630613735393861 Jan 24 00:29:01.298000 audit: BPF prog-id=110 op=LOAD Jan 24 00:29:01.298000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2452 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637306132356233616263386232366334313863366630613735393861 Jan 24 00:29:01.299757 containerd[1583]: time="2026-01-24T00:29:01.298329347Z" level=info msg="Container 146f343a639fba4479973da4844bfececbb11d19ac21229d5bcd16b9c573231c: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:01.314414 containerd[1583]: time="2026-01-24T00:29:01.314113489Z" level=info msg="CreateContainer within sandbox 
\"ba16c8e492af8ade00b5ab6df3e89035f484e800c3c571f636c2fc279635b174\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"146f343a639fba4479973da4844bfececbb11d19ac21229d5bcd16b9c573231c\"" Jan 24 00:29:01.318801 containerd[1583]: time="2026-01-24T00:29:01.318751027Z" level=info msg="StartContainer for \"146f343a639fba4479973da4844bfececbb11d19ac21229d5bcd16b9c573231c\"" Jan 24 00:29:01.320863 systemd[1]: Started cri-containerd-b023d91cd2c0db39e0a638f4e9ab62feee821b9c29fd0512f358dd0159186fe0.scope - libcontainer container b023d91cd2c0db39e0a638f4e9ab62feee821b9c29fd0512f358dd0159186fe0. Jan 24 00:29:01.324572 containerd[1583]: time="2026-01-24T00:29:01.324537584Z" level=info msg="connecting to shim 146f343a639fba4479973da4844bfececbb11d19ac21229d5bcd16b9c573231c" address="unix:///run/containerd/s/3e5f3e3979f0ed327fec353e73a7bc19bea7588d3c7f61192a15cacfd13e269a" protocol=ttrpc version=3 Jan 24 00:29:01.361082 systemd[1]: Started cri-containerd-146f343a639fba4479973da4844bfececbb11d19ac21229d5bcd16b9c573231c.scope - libcontainer container 146f343a639fba4479973da4844bfececbb11d19ac21229d5bcd16b9c573231c. Jan 24 00:29:01.365362 kubelet[2411]: I0124 00:29:01.365289 2411 kubelet_node_status.go:75] "Attempting to register node" node="172-234-200-140" Jan 24 00:29:01.366238 kubelet[2411]: E0124 00:29:01.366085 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.200.140:6443/api/v1/nodes\": dial tcp 172.234.200.140:6443: connect: connection refused" node="172-234-200-140" Jan 24 00:29:01.370000 audit: BPF prog-id=111 op=LOAD Jan 24 00:29:01.371000 audit: BPF prog-id=112 op=LOAD Jan 24 00:29:01.371000 audit[2599]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2497 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230323364393163643263306462333965306136333866346539616236 Jan 24 00:29:01.372000 audit: BPF prog-id=112 op=UNLOAD Jan 24 00:29:01.372000 audit[2599]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2497 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230323364393163643263306462333965306136333866346539616236 Jan 24 00:29:01.372000 audit: BPF prog-id=113 op=LOAD Jan 24 00:29:01.372000 audit[2599]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2497 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.372000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230323364393163643263306462333965306136333866346539616236 Jan 24 00:29:01.372000 audit: BPF prog-id=114 op=LOAD Jan 24 00:29:01.372000 audit[2599]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2497 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230323364393163643263306462333965306136333866346539616236 Jan 24 00:29:01.373000 audit: BPF prog-id=114 op=UNLOAD Jan 24 00:29:01.373000 audit[2599]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2497 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230323364393163643263306462333965306136333866346539616236 Jan 24 00:29:01.373000 audit: BPF prog-id=113 op=UNLOAD Jan 24 00:29:01.373000 audit[2599]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2497 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230323364393163643263306462333965306136333866346539616236 Jan 24 00:29:01.373000 audit: BPF prog-id=115 op=LOAD Jan 24 00:29:01.373000 audit[2599]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2497 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230323364393163643263306462333965306136333866346539616236 Jan 24 00:29:01.375240 containerd[1583]: time="2026-01-24T00:29:01.374670409Z" level=info msg="StartContainer for \"670a25b3abc8b26c418c6f0a7598a2ee511a47e74c7c18da1f7833848ed11fd2\" returns successfully" Jan 24 00:29:01.408000 audit: BPF prog-id=116 op=LOAD Jan 24 00:29:01.409000 audit: BPF prog-id=117 op=LOAD Jan 24 00:29:01.409000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2481 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366633343361363339666261343437393937336461343834346266 Jan 24 00:29:01.409000 audit: BPF prog-id=117 op=UNLOAD Jan 24 00:29:01.409000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366633343361363339666261343437393937336461343834346266 Jan 24 00:29:01.410000 audit: BPF prog-id=118 op=LOAD Jan 24 00:29:01.410000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2481 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366633343361363339666261343437393937336461343834346266 Jan 24 00:29:01.410000 audit: BPF prog-id=119 op=LOAD Jan 24 00:29:01.410000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2481 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366633343361363339666261343437393937336461343834346266 Jan 24 00:29:01.410000 audit: BPF prog-id=119 op=UNLOAD Jan 24 00:29:01.410000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366633343361363339666261343437393937336461343834346266 Jan 24 00:29:01.410000 audit: BPF prog-id=118 op=UNLOAD Jan 24 00:29:01.410000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2481 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.410000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366633343361363339666261343437393937336461343834346266 Jan 24 00:29:01.410000 audit: BPF prog-id=120 op=LOAD Jan 24 00:29:01.410000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2481 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:01.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366633343361363339666261343437393937336461343834346266 Jan 24 00:29:01.444548 containerd[1583]: time="2026-01-24T00:29:01.444501254Z" level=info msg="StartContainer for \"b023d91cd2c0db39e0a638f4e9ab62feee821b9c29fd0512f358dd0159186fe0\" returns successfully" Jan 24 00:29:01.485491 containerd[1583]: time="2026-01-24T00:29:01.485416124Z" level=info msg="StartContainer for \"146f343a639fba4479973da4844bfececbb11d19ac21229d5bcd16b9c573231c\" returns successfully" Jan 24 00:29:01.627912 kubelet[2411]: E0124 00:29:01.627740 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-140\" not found" node="172-234-200-140" Jan 24 00:29:01.628015 kubelet[2411]: E0124 00:29:01.627944 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:01.634043 kubelet[2411]: E0124 00:29:01.633791 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-140\" not found" node="172-234-200-140" Jan 24 00:29:01.634043 kubelet[2411]: E0124 00:29:01.633942 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:01.637056 kubelet[2411]: E0124 00:29:01.637026 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-140\" not found" node="172-234-200-140" Jan 24 00:29:01.637238 kubelet[2411]: E0124 00:29:01.637209 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:02.169721 kubelet[2411]: I0124 00:29:02.169635 2411 kubelet_node_status.go:75] "Attempting to register node" node="172-234-200-140" Jan 24 00:29:02.640821 kubelet[2411]: E0124 00:29:02.640404 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-140\" not found" node="172-234-200-140" Jan 24 00:29:02.640821 kubelet[2411]: E0124 00:29:02.640525 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:02.640821 kubelet[2411]: E0124 00:29:02.640703 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"172-234-200-140\" not found" node="172-234-200-140" Jan 24 00:29:02.640821 kubelet[2411]: E0124 00:29:02.640773 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:03.094957 kubelet[2411]: E0124 00:29:03.094908 2411 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-200-140\" not found" node="172-234-200-140" Jan 24 00:29:03.123251 kubelet[2411]: I0124 00:29:03.123204 2411 kubelet_node_status.go:78] "Successfully registered node" node="172-234-200-140" Jan 24 00:29:03.186284 kubelet[2411]: I0124 00:29:03.186241 2411 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-200-140" Jan 24 00:29:03.242225 kubelet[2411]: E0124 00:29:03.242004 2411 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-200-140\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-200-140" Jan 24 00:29:03.242225 kubelet[2411]: I0124 00:29:03.242034 2411 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:03.249939 kubelet[2411]: E0124 00:29:03.249480 2411 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-200-140\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:03.249939 kubelet[2411]: I0124 00:29:03.249515 2411 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:03.251595 kubelet[2411]: E0124 00:29:03.251538 2411 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-200-140\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:03.580636 kubelet[2411]: I0124 00:29:03.580607 2411 apiserver.go:52] "Watching apiserver" Jan 24 00:29:03.587331 kubelet[2411]: I0124 00:29:03.587295 2411 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:29:05.105067 systemd[1]: Reload requested from client PID 2677 ('systemctl') (unit session-8.scope)... Jan 24 00:29:05.105092 systemd[1]: Reloading... Jan 24 00:29:05.220101 zram_generator::config[2724]: No configuration found. Jan 24 00:29:05.456742 systemd[1]: Reloading finished in 351 ms. Jan 24 00:29:05.495079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:05.516408 kernel: kauditd_printk_skb: 210 callbacks suppressed Jan 24 00:29:05.516810 kernel: audit: type=1131 audit(1769214545.514:401): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:29:05.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:29:05.514884 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:29:05.515194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
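The recurring dns.go:153 "Nameserver limits exceeded" errors in this section come from the kubelet trimming the host resolv.conf to the three nameservers it can pass through to pods; the "applied nameserver line" in those messages lists the survivors. A minimal Go sketch of that condition, assuming the standard /etc/resolv.conf location and a limit of three (as the truncated list in the log suggests); this is not the kubelet's actual implementation:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }

        const limit = 3 // the kubelet keeps at most three nameservers for pods
        if len(servers) > limit {
            fmt.Printf("%d nameservers configured, only the first %d are applied: %v\n",
                len(servers), limit, servers[:limit])
        } else {
            fmt.Println("nameserver count is within the limit:", servers)
        }
    }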
Jan 24 00:29:05.515250 systemd[1]: kubelet.service: Consumed 823ms CPU time, 131.8M memory peak. Jan 24 00:29:05.519657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:05.521000 audit: BPF prog-id=121 op=LOAD Jan 24 00:29:05.521000 audit: BPF prog-id=67 op=UNLOAD Jan 24 00:29:05.527478 kernel: audit: type=1334 audit(1769214545.521:402): prog-id=121 op=LOAD Jan 24 00:29:05.527522 kernel: audit: type=1334 audit(1769214545.521:403): prog-id=67 op=UNLOAD Jan 24 00:29:05.527739 kernel: audit: type=1334 audit(1769214545.526:404): prog-id=122 op=LOAD Jan 24 00:29:05.527760 kernel: audit: type=1334 audit(1769214545.526:405): prog-id=88 op=UNLOAD Jan 24 00:29:05.527782 kernel: audit: type=1334 audit(1769214545.526:406): prog-id=123 op=LOAD Jan 24 00:29:05.527801 kernel: audit: type=1334 audit(1769214545.526:407): prog-id=124 op=LOAD Jan 24 00:29:05.527822 kernel: audit: type=1334 audit(1769214545.526:408): prog-id=89 op=UNLOAD Jan 24 00:29:05.527843 kernel: audit: type=1334 audit(1769214545.526:409): prog-id=90 op=UNLOAD Jan 24 00:29:05.527865 kernel: audit: type=1334 audit(1769214545.526:410): prog-id=125 op=LOAD Jan 24 00:29:05.526000 audit: BPF prog-id=122 op=LOAD Jan 24 00:29:05.526000 audit: BPF prog-id=88 op=UNLOAD Jan 24 00:29:05.526000 audit: BPF prog-id=123 op=LOAD Jan 24 00:29:05.526000 audit: BPF prog-id=124 op=LOAD Jan 24 00:29:05.526000 audit: BPF prog-id=89 op=UNLOAD Jan 24 00:29:05.526000 audit: BPF prog-id=90 op=UNLOAD Jan 24 00:29:05.526000 audit: BPF prog-id=125 op=LOAD Jan 24 00:29:05.526000 audit: BPF prog-id=126 op=LOAD Jan 24 00:29:05.526000 audit: BPF prog-id=69 op=UNLOAD Jan 24 00:29:05.526000 audit: BPF prog-id=70 op=UNLOAD Jan 24 00:29:05.528000 audit: BPF prog-id=127 op=LOAD Jan 24 00:29:05.528000 audit: BPF prog-id=82 op=UNLOAD Jan 24 00:29:05.528000 audit: BPF prog-id=128 op=LOAD Jan 24 00:29:05.528000 audit: BPF prog-id=129 op=LOAD Jan 24 00:29:05.528000 audit: BPF prog-id=83 op=UNLOAD Jan 24 00:29:05.528000 audit: BPF prog-id=84 op=UNLOAD Jan 24 00:29:05.529000 audit: BPF prog-id=130 op=LOAD Jan 24 00:29:05.529000 audit: BPF prog-id=68 op=UNLOAD Jan 24 00:29:05.531000 audit: BPF prog-id=131 op=LOAD Jan 24 00:29:05.531000 audit: BPF prog-id=71 op=UNLOAD Jan 24 00:29:05.531000 audit: BPF prog-id=132 op=LOAD Jan 24 00:29:05.531000 audit: BPF prog-id=133 op=LOAD Jan 24 00:29:05.531000 audit: BPF prog-id=72 op=UNLOAD Jan 24 00:29:05.531000 audit: BPF prog-id=73 op=UNLOAD Jan 24 00:29:05.533000 audit: BPF prog-id=134 op=LOAD Jan 24 00:29:05.533000 audit: BPF prog-id=78 op=UNLOAD Jan 24 00:29:05.533000 audit: BPF prog-id=135 op=LOAD Jan 24 00:29:05.533000 audit: BPF prog-id=136 op=LOAD Jan 24 00:29:05.533000 audit: BPF prog-id=79 op=UNLOAD Jan 24 00:29:05.533000 audit: BPF prog-id=80 op=UNLOAD Jan 24 00:29:05.535000 audit: BPF prog-id=137 op=LOAD Jan 24 00:29:05.535000 audit: BPF prog-id=85 op=UNLOAD Jan 24 00:29:05.535000 audit: BPF prog-id=138 op=LOAD Jan 24 00:29:05.535000 audit: BPF prog-id=139 op=LOAD Jan 24 00:29:05.535000 audit: BPF prog-id=86 op=UNLOAD Jan 24 00:29:05.535000 audit: BPF prog-id=87 op=UNLOAD Jan 24 00:29:05.536000 audit: BPF prog-id=140 op=LOAD Jan 24 00:29:05.536000 audit: BPF prog-id=77 op=UNLOAD Jan 24 00:29:05.538000 audit: BPF prog-id=141 op=LOAD Jan 24 00:29:05.538000 audit: BPF prog-id=81 op=UNLOAD Jan 24 00:29:05.540000 audit: BPF prog-id=142 op=LOAD Jan 24 00:29:05.540000 audit: BPF prog-id=74 op=UNLOAD Jan 24 00:29:05.540000 audit: BPF prog-id=143 op=LOAD Jan 24 00:29:05.540000 audit: BPF prog-id=144 
op=LOAD Jan 24 00:29:05.540000 audit: BPF prog-id=75 op=UNLOAD Jan 24 00:29:05.540000 audit: BPF prog-id=76 op=UNLOAD Jan 24 00:29:05.711822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:29:05.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:29:05.720715 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:29:05.755255 kubelet[2775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:29:05.755255 kubelet[2775]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:29:05.755255 kubelet[2775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:29:05.755966 kubelet[2775]: I0124 00:29:05.755321 2775 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:29:05.761724 kubelet[2775]: I0124 00:29:05.761695 2775 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:29:05.761724 kubelet[2775]: I0124 00:29:05.761715 2775 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:29:05.761912 kubelet[2775]: I0124 00:29:05.761891 2775 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:29:05.763425 kubelet[2775]: I0124 00:29:05.763233 2775 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 00:29:05.766513 kubelet[2775]: I0124 00:29:05.766493 2775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:29:05.769391 kubelet[2775]: I0124 00:29:05.769362 2775 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 24 00:29:05.772761 kubelet[2775]: I0124 00:29:05.772738 2775 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:29:05.773047 kubelet[2775]: I0124 00:29:05.773014 2775 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:29:05.776078 kubelet[2775]: I0124 00:29:05.773042 2775 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-200-140","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:29:05.776168 kubelet[2775]: I0124 00:29:05.776110 2775 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:29:05.776168 kubelet[2775]: I0124 00:29:05.776121 2775 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:29:05.776168 kubelet[2775]: I0124 00:29:05.776167 2775 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:29:05.776366 kubelet[2775]: I0124 00:29:05.776331 2775 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:29:05.778950 kubelet[2775]: I0124 00:29:05.776362 2775 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:29:05.778950 kubelet[2775]: I0124 00:29:05.778476 2775 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:29:05.778950 kubelet[2775]: I0124 00:29:05.778489 2775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:29:05.783468 kubelet[2775]: I0124 00:29:05.783441 2775 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 24 00:29:05.783743 kubelet[2775]: I0124 00:29:05.783726 2775 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:29:05.784070 kubelet[2775]: I0124 00:29:05.784052 2775 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:29:05.784104 kubelet[2775]: I0124 00:29:05.784076 2775 server.go:1287] "Started kubelet" Jan 24 00:29:05.787365 kubelet[2775]: I0124 00:29:05.787247 2775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:29:05.790862 kubelet[2775]: I0124 00:29:05.790582 2775 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jan 24 00:29:05.792748 kubelet[2775]: I0124 00:29:05.792701 2775 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:29:05.793748 kubelet[2775]: I0124 00:29:05.793691 2775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:29:05.794065 kubelet[2775]: I0124 00:29:05.794045 2775 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:29:05.795359 kubelet[2775]: I0124 00:29:05.795336 2775 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:29:05.798786 kubelet[2775]: I0124 00:29:05.798767 2775 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:29:05.800120 kubelet[2775]: I0124 00:29:05.800046 2775 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:29:05.800200 kubelet[2775]: I0124 00:29:05.800161 2775 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:29:05.804839 kubelet[2775]: I0124 00:29:05.804769 2775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:29:05.806502 kubelet[2775]: I0124 00:29:05.806258 2775 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:29:05.806502 kubelet[2775]: I0124 00:29:05.806271 2775 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:29:05.810041 kubelet[2775]: I0124 00:29:05.810014 2775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:29:05.811290 kubelet[2775]: I0124 00:29:05.811275 2775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:29:05.811421 kubelet[2775]: I0124 00:29:05.811394 2775 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:29:05.811515 kubelet[2775]: I0124 00:29:05.811502 2775 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:29:05.811578 kubelet[2775]: I0124 00:29:05.811569 2775 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:29:05.811670 kubelet[2775]: E0124 00:29:05.811654 2775 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:29:05.833147 kubelet[2775]: E0124 00:29:05.833113 2775 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:29:05.860248 kubelet[2775]: I0124 00:29:05.860222 2775 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:29:05.860445 kubelet[2775]: I0124 00:29:05.860369 2775 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:29:05.860445 kubelet[2775]: I0124 00:29:05.860419 2775 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:29:05.860773 kubelet[2775]: I0124 00:29:05.860758 2775 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:29:05.860917 kubelet[2775]: I0124 00:29:05.860831 2775 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:29:05.860967 kubelet[2775]: I0124 00:29:05.860959 2775 policy_none.go:49] "None policy: Start" Jan 24 00:29:05.861014 kubelet[2775]: I0124 00:29:05.861007 2775 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:29:05.861068 kubelet[2775]: I0124 00:29:05.861059 2775 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:29:05.861230 kubelet[2775]: I0124 00:29:05.861220 2775 state_mem.go:75] "Updated machine memory state" Jan 24 00:29:05.867276 kubelet[2775]: I0124 00:29:05.866602 2775 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:29:05.867276 kubelet[2775]: I0124 00:29:05.866787 2775 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:29:05.867276 kubelet[2775]: I0124 00:29:05.866807 2775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:29:05.867276 kubelet[2775]: I0124 00:29:05.866992 2775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:29:05.868311 kubelet[2775]: E0124 00:29:05.868259 2775 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:29:05.913135 kubelet[2775]: I0124 00:29:05.913092 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-200-140" Jan 24 00:29:05.915396 kubelet[2775]: I0124 00:29:05.913662 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:05.915396 kubelet[2775]: I0124 00:29:05.914518 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:05.982457 kubelet[2775]: I0124 00:29:05.982152 2775 kubelet_node_status.go:75] "Attempting to register node" node="172-234-200-140" Jan 24 00:29:05.989091 kubelet[2775]: I0124 00:29:05.989050 2775 kubelet_node_status.go:124] "Node was previously registered" node="172-234-200-140" Jan 24 00:29:05.989166 kubelet[2775]: I0124 00:29:05.989144 2775 kubelet_node_status.go:78] "Successfully registered node" node="172-234-200-140" Jan 24 00:29:06.000764 kubelet[2775]: I0124 00:29:06.000731 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bee5a78f536f63bf489787fd6be95d47-ca-certs\") pod \"kube-apiserver-172-234-200-140\" (UID: \"bee5a78f536f63bf489787fd6be95d47\") " pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:06.000764 kubelet[2775]: I0124 00:29:06.000757 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bee5a78f536f63bf489787fd6be95d47-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-200-140\" (UID: \"bee5a78f536f63bf489787fd6be95d47\") " pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:06.000864 kubelet[2775]: I0124 00:29:06.000776 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f6998f0fbe2ee19682d62c3879de96a-ca-certs\") pod \"kube-controller-manager-172-234-200-140\" (UID: \"4f6998f0fbe2ee19682d62c3879de96a\") " pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:06.000864 kubelet[2775]: I0124 00:29:06.000792 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f6998f0fbe2ee19682d62c3879de96a-kubeconfig\") pod \"kube-controller-manager-172-234-200-140\" (UID: \"4f6998f0fbe2ee19682d62c3879de96a\") " pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:06.000864 kubelet[2775]: I0124 00:29:06.000805 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/814956290f6db45816faf08cf94d87b3-kubeconfig\") pod \"kube-scheduler-172-234-200-140\" (UID: \"814956290f6db45816faf08cf94d87b3\") " pod="kube-system/kube-scheduler-172-234-200-140" Jan 24 00:29:06.000864 kubelet[2775]: I0124 00:29:06.000819 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bee5a78f536f63bf489787fd6be95d47-k8s-certs\") pod \"kube-apiserver-172-234-200-140\" (UID: \"bee5a78f536f63bf489787fd6be95d47\") " pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:06.000864 kubelet[2775]: I0124 00:29:06.000833 2775 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f6998f0fbe2ee19682d62c3879de96a-flexvolume-dir\") pod \"kube-controller-manager-172-234-200-140\" (UID: \"4f6998f0fbe2ee19682d62c3879de96a\") " pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:06.000971 kubelet[2775]: I0124 00:29:06.000847 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f6998f0fbe2ee19682d62c3879de96a-k8s-certs\") pod \"kube-controller-manager-172-234-200-140\" (UID: \"4f6998f0fbe2ee19682d62c3879de96a\") " pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:06.000971 kubelet[2775]: I0124 00:29:06.000863 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f6998f0fbe2ee19682d62c3879de96a-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-200-140\" (UID: \"4f6998f0fbe2ee19682d62c3879de96a\") " pod="kube-system/kube-controller-manager-172-234-200-140" Jan 24 00:29:06.219633 kubelet[2775]: E0124 00:29:06.219319 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:06.221704 kubelet[2775]: E0124 00:29:06.221508 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:06.222940 kubelet[2775]: E0124 00:29:06.222878 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:06.778835 kubelet[2775]: I0124 00:29:06.778800 2775 apiserver.go:52] "Watching apiserver" Jan 24 00:29:06.801395 kubelet[2775]: I0124 00:29:06.800663 2775 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:29:06.845653 kubelet[2775]: E0124 00:29:06.845602 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:06.846046 kubelet[2775]: I0124 00:29:06.846031 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-200-140" Jan 24 00:29:06.846220 kubelet[2775]: I0124 00:29:06.846204 2775 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:06.852043 kubelet[2775]: E0124 00:29:06.851866 2775 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-200-140\" already exists" pod="kube-system/kube-scheduler-172-234-200-140" Jan 24 00:29:06.852043 kubelet[2775]: E0124 00:29:06.851999 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:06.852312 kubelet[2775]: E0124 00:29:06.852285 2775 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-200-140\" already exists" pod="kube-system/kube-apiserver-172-234-200-140" Jan 24 00:29:06.852578 kubelet[2775]: E0124 00:29:06.852504 
2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:06.861576 kubelet[2775]: I0124 00:29:06.861497 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-200-140" podStartSLOduration=1.861488095 podStartE2EDuration="1.861488095s" podCreationTimestamp="2026-01-24 00:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:29:06.861445005 +0000 UTC m=+1.136033913" watchObservedRunningTime="2026-01-24 00:29:06.861488095 +0000 UTC m=+1.136077003" Jan 24 00:29:06.873518 kubelet[2775]: I0124 00:29:06.873465 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-200-140" podStartSLOduration=1.873456099 podStartE2EDuration="1.873456099s" podCreationTimestamp="2026-01-24 00:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:29:06.867271972 +0000 UTC m=+1.141860880" watchObservedRunningTime="2026-01-24 00:29:06.873456099 +0000 UTC m=+1.148045007" Jan 24 00:29:06.879711 kubelet[2775]: I0124 00:29:06.879672 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-200-140" podStartSLOduration=1.879663826 podStartE2EDuration="1.879663826s" podCreationTimestamp="2026-01-24 00:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:29:06.874138408 +0000 UTC m=+1.148727326" watchObservedRunningTime="2026-01-24 00:29:06.879663826 +0000 UTC m=+1.154252734" Jan 24 00:29:07.846695 kubelet[2775]: E0124 00:29:07.846659 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:07.847628 kubelet[2775]: E0124 00:29:07.847555 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:08.848165 kubelet[2775]: E0124 00:29:08.848118 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:10.538037 kubelet[2775]: I0124 00:29:10.538007 2775 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:29:10.538589 containerd[1583]: time="2026-01-24T00:29:10.538536356Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:29:10.538966 kubelet[2775]: I0124 00:29:10.538790 2775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:29:10.677077 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 24 00:29:10.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:29:10.678587 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 24 00:29:10.678636 kernel: audit: type=1131 audit(1769214550.677:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:29:10.693000 audit: BPF prog-id=134 op=UNLOAD Jan 24 00:29:10.696414 kernel: audit: type=1334 audit(1769214550.693:452): prog-id=134 op=UNLOAD Jan 24 00:29:11.553131 systemd[1]: Created slice kubepods-besteffort-pode733bc15_0d70_4abc_b5cf_597f87834be4.slice - libcontainer container kubepods-besteffort-pode733bc15_0d70_4abc_b5cf_597f87834be4.slice. Jan 24 00:29:11.638690 kubelet[2775]: I0124 00:29:11.638642 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e733bc15-0d70-4abc-b5cf-597f87834be4-lib-modules\") pod \"kube-proxy-gfpvc\" (UID: \"e733bc15-0d70-4abc-b5cf-597f87834be4\") " pod="kube-system/kube-proxy-gfpvc" Jan 24 00:29:11.638690 kubelet[2775]: I0124 00:29:11.638688 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e733bc15-0d70-4abc-b5cf-597f87834be4-kube-proxy\") pod \"kube-proxy-gfpvc\" (UID: \"e733bc15-0d70-4abc-b5cf-597f87834be4\") " pod="kube-system/kube-proxy-gfpvc" Jan 24 00:29:11.639204 kubelet[2775]: I0124 00:29:11.638709 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e733bc15-0d70-4abc-b5cf-597f87834be4-xtables-lock\") pod \"kube-proxy-gfpvc\" (UID: \"e733bc15-0d70-4abc-b5cf-597f87834be4\") " pod="kube-system/kube-proxy-gfpvc" Jan 24 00:29:11.639204 kubelet[2775]: I0124 00:29:11.638725 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnsxq\" (UniqueName: \"kubernetes.io/projected/e733bc15-0d70-4abc-b5cf-597f87834be4-kube-api-access-gnsxq\") pod \"kube-proxy-gfpvc\" (UID: \"e733bc15-0d70-4abc-b5cf-597f87834be4\") " pod="kube-system/kube-proxy-gfpvc" Jan 24 00:29:11.670891 kubelet[2775]: W0124 00:29:11.670859 2775 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-234-200-140" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-234-200-140' and this object Jan 24 00:29:11.670891 kubelet[2775]: E0124 00:29:11.670903 2775 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-234-200-140\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-234-200-140' and this object" logger="UnhandledError" Jan 24 00:29:11.678239 systemd[1]: Created slice kubepods-besteffort-podc789c7f0_be5e_4a02_bea2_5755d320c0ad.slice - libcontainer container kubepods-besteffort-podc789c7f0_be5e_4a02_bea2_5755d320c0ad.slice. 
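The kubepods-besteffort-pod…​.slice units that systemd reports here are derived directly from each pod's UID and QoS class: with the systemd cgroup driver, the kubelet prefixes the QoS tier and replaces the dashes in the pod UID with underscores. A minimal sketch of that mapping, assuming only the naming convention visible in the surrounding entries (the helper name is illustrative, not any Kubernetes API):

    # Sketch: reproduce the systemd slice name the kubelet creates for a pod,
    # e.g. UID e733bc15-0d70-4abc-b5cf-597f87834be4 in the BestEffort tier.
    def kubepods_slice_name(qos_tier: str, pod_uid: str) -> str:
        # dashes in the pod UID become underscores inside the unit name
        return f"kubepods-{qos_tier}-pod{pod_uid.replace('-', '_')}.slice"

    print(kubepods_slice_name("besteffort", "e733bc15-0d70-4abc-b5cf-597f87834be4"))
    # kubepods-besteffort-pode733bc15_0d70_4abc_b5cf_597f87834be4.slice
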
Jan 24 00:29:11.739741 kubelet[2775]: I0124 00:29:11.739669 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9267\" (UniqueName: \"kubernetes.io/projected/c789c7f0-be5e-4a02-bea2-5755d320c0ad-kube-api-access-w9267\") pod \"tigera-operator-7dcd859c48-6xl9s\" (UID: \"c789c7f0-be5e-4a02-bea2-5755d320c0ad\") " pod="tigera-operator/tigera-operator-7dcd859c48-6xl9s" Jan 24 00:29:11.739741 kubelet[2775]: I0124 00:29:11.739726 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c789c7f0-be5e-4a02-bea2-5755d320c0ad-var-lib-calico\") pod \"tigera-operator-7dcd859c48-6xl9s\" (UID: \"c789c7f0-be5e-4a02-bea2-5755d320c0ad\") " pod="tigera-operator/tigera-operator-7dcd859c48-6xl9s" Jan 24 00:29:11.861184 kubelet[2775]: E0124 00:29:11.861072 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:11.862074 containerd[1583]: time="2026-01-24T00:29:11.861895084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gfpvc,Uid:e733bc15-0d70-4abc-b5cf-597f87834be4,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:11.879530 containerd[1583]: time="2026-01-24T00:29:11.879483865Z" level=info msg="connecting to shim ba81435110bca723a7ea84c8790261df07c22c5d47a5fc010658f78cc6e6a35d" address="unix:///run/containerd/s/bf8ec94165a4f0061ca3338967fb123209307d12a8bd9cd45e0f6419ab78a488" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:11.910565 systemd[1]: Started cri-containerd-ba81435110bca723a7ea84c8790261df07c22c5d47a5fc010658f78cc6e6a35d.scope - libcontainer container ba81435110bca723a7ea84c8790261df07c22c5d47a5fc010658f78cc6e6a35d. 
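Each RunPodSandbox request above follows the same sequence: containerd logs the 64-character sandbox ID, dials the per-shim ttrpc socket under /run/containerd/s/, and systemd then starts the sandbox as a cri-containerd-<id>.scope unit carrying the same ID. A rough way to pull those two fields out of a "connecting to shim" line; the regex and variable names are illustrative only and assume the log format shown above:

    import re

    # Sample text copied from the kube-proxy-gfpvc sandbox entry above.
    line = ('msg="connecting to shim ba81435110bca723a7ea84c8790261df07c22c5d47a5fc010658f78cc6e6a35d" '
            'address="unix:///run/containerd/s/bf8ec94165a4f0061ca3338967fb123209307d12a8bd9cd45e0f6419ab78a488"')

    m = re.search(r'connecting to shim (?P<shim>[0-9a-f]{64})" address="(?P<addr>unix://[^"]+)"', line)
    if m:
        print(m.group("shim"))  # reappears later as the cri-containerd-<id>.scope unit
        print(m.group("addr"))  # per-shim ttrpc socket under /run/containerd/s/
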
Jan 24 00:29:11.927000 audit: BPF prog-id=145 op=LOAD Jan 24 00:29:11.927000 audit: BPF prog-id=146 op=LOAD Jan 24 00:29:11.930779 kernel: audit: type=1334 audit(1769214551.927:453): prog-id=145 op=LOAD Jan 24 00:29:11.930831 kernel: audit: type=1334 audit(1769214551.927:454): prog-id=146 op=LOAD Jan 24 00:29:11.927000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230238 a2=98 a3=0 items=0 ppid=2830 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:11.934206 kernel: audit: type=1300 audit(1769214551.927:454): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230238 a2=98 a3=0 items=0 ppid=2830 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:11.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383134333531313062636137323361376561383463383739303236 Jan 24 00:29:11.942530 kernel: audit: type=1327 audit(1769214551.927:454): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383134333531313062636137323361376561383463383739303236 Jan 24 00:29:11.948078 kernel: audit: type=1334 audit(1769214551.927:455): prog-id=146 op=UNLOAD Jan 24 00:29:11.927000 audit: BPF prog-id=146 op=UNLOAD Jan 24 00:29:11.952399 kernel: audit: type=1300 audit(1769214551.927:455): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2830 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:11.927000 audit[2840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2830 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:11.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383134333531313062636137323361376561383463383739303236 Jan 24 00:29:11.960279 kernel: audit: type=1327 audit(1769214551.927:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383134333531313062636137323361376561383463383739303236 Jan 24 00:29:11.961152 containerd[1583]: time="2026-01-24T00:29:11.961106394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gfpvc,Uid:e733bc15-0d70-4abc-b5cf-597f87834be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba81435110bca723a7ea84c8790261df07c22c5d47a5fc010658f78cc6e6a35d\"" Jan 24 00:29:11.962113 kubelet[2775]: E0124 00:29:11.962052 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:11.966357 kernel: audit: type=1334 audit(1769214551.927:456): prog-id=147 op=LOAD Jan 24 00:29:11.927000 audit: BPF prog-id=147 op=LOAD Jan 24 00:29:11.966461 containerd[1583]: time="2026-01-24T00:29:11.966034782Z" level=info msg="CreateContainer within sandbox \"ba81435110bca723a7ea84c8790261df07c22c5d47a5fc010658f78cc6e6a35d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:29:11.927000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230488 a2=98 a3=0 items=0 ppid=2830 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:11.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383134333531313062636137323361376561383463383739303236 Jan 24 00:29:11.928000 audit: BPF prog-id=148 op=LOAD Jan 24 00:29:11.928000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000230218 a2=98 a3=0 items=0 ppid=2830 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383134333531313062636137323361376561383463383739303236 Jan 24 00:29:11.928000 audit: BPF prog-id=148 op=UNLOAD Jan 24 00:29:11.928000 audit[2840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2830 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383134333531313062636137323361376561383463383739303236 Jan 24 00:29:11.928000 audit: BPF prog-id=147 op=UNLOAD Jan 24 00:29:11.928000 audit[2840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2830 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383134333531313062636137323361376561383463383739303236 Jan 24 00:29:11.928000 audit: BPF prog-id=149 op=LOAD Jan 24 00:29:11.928000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002306e8 a2=98 a3=0 items=0 ppid=2830 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:29:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261383134333531313062636137323361376561383463383739303236 Jan 24 00:29:11.980240 containerd[1583]: time="2026-01-24T00:29:11.980210995Z" level=info msg="Container 4f587fb2bec8685ca5ce9852c9ece75a26c7af4c94419d7359ff80e64f33edf4: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:11.982798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255898100.mount: Deactivated successfully. Jan 24 00:29:11.987534 containerd[1583]: time="2026-01-24T00:29:11.987503841Z" level=info msg="CreateContainer within sandbox \"ba81435110bca723a7ea84c8790261df07c22c5d47a5fc010658f78cc6e6a35d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f587fb2bec8685ca5ce9852c9ece75a26c7af4c94419d7359ff80e64f33edf4\"" Jan 24 00:29:11.988431 containerd[1583]: time="2026-01-24T00:29:11.987974971Z" level=info msg="StartContainer for \"4f587fb2bec8685ca5ce9852c9ece75a26c7af4c94419d7359ff80e64f33edf4\"" Jan 24 00:29:11.989898 containerd[1583]: time="2026-01-24T00:29:11.989523350Z" level=info msg="connecting to shim 4f587fb2bec8685ca5ce9852c9ece75a26c7af4c94419d7359ff80e64f33edf4" address="unix:///run/containerd/s/bf8ec94165a4f0061ca3338967fb123209307d12a8bd9cd45e0f6419ab78a488" protocol=ttrpc version=3 Jan 24 00:29:12.009559 systemd[1]: Started cri-containerd-4f587fb2bec8685ca5ce9852c9ece75a26c7af4c94419d7359ff80e64f33edf4.scope - libcontainer container 4f587fb2bec8685ca5ce9852c9ece75a26c7af4c94419d7359ff80e64f33edf4. Jan 24 00:29:12.068000 audit: BPF prog-id=150 op=LOAD Jan 24 00:29:12.068000 audit[2867]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2830 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353837666232626563383638356361356365393835326339656365 Jan 24 00:29:12.068000 audit: BPF prog-id=151 op=LOAD Jan 24 00:29:12.068000 audit[2867]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2830 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353837666232626563383638356361356365393835326339656365 Jan 24 00:29:12.068000 audit: BPF prog-id=151 op=UNLOAD Jan 24 00:29:12.068000 audit[2867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2830 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.068000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353837666232626563383638356361356365393835326339656365 Jan 24 00:29:12.068000 audit: BPF prog-id=150 op=UNLOAD Jan 24 00:29:12.068000 audit[2867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2830 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353837666232626563383638356361356365393835326339656365 Jan 24 00:29:12.068000 audit: BPF prog-id=152 op=LOAD Jan 24 00:29:12.068000 audit[2867]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2830 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353837666232626563383638356361356365393835326339656365 Jan 24 00:29:12.088344 containerd[1583]: time="2026-01-24T00:29:12.088270201Z" level=info msg="StartContainer for \"4f587fb2bec8685ca5ce9852c9ece75a26c7af4c94419d7359ff80e64f33edf4\" returns successfully" Jan 24 00:29:12.221000 audit[2929]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.221000 audit[2929]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcea2988d0 a2=0 a3=7ffcea2988bc items=0 ppid=2879 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.221000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:29:12.224000 audit[2931]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=2931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.224000 audit[2931]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc310cec40 a2=0 a3=7ffc310cec2c items=0 ppid=2879 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:29:12.226000 audit[2932]: NETFILTER_CFG table=mangle:56 family=2 entries=1 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.226000 audit[2932]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff057881b0 a2=0 a3=7fff0578819c items=0 ppid=2879 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 24 00:29:12.228000 audit[2933]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.228000 audit[2933]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe610f75f0 a2=0 a3=7ffe610f75dc items=0 ppid=2879 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.228000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 24 00:29:12.230000 audit[2934]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=2934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.230000 audit[2934]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd04645810 a2=0 a3=7ffd046457fc items=0 ppid=2879 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:29:12.232000 audit[2935]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.232000 audit[2935]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe00a7b000 a2=0 a3=7ffe00a7afec items=0 ppid=2879 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 24 00:29:12.327000 audit[2936]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.327000 audit[2936]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe11d4b8a0 a2=0 a3=7ffe11d4b88c items=0 ppid=2879 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.327000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:29:12.331000 audit[2938]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.331000 audit[2938]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd5cf98880 a2=0 a3=7ffd5cf9886c items=0 ppid=2879 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 24 00:29:12.331000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 24 00:29:12.335000 audit[2941]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.335000 audit[2941]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff9e82a280 a2=0 a3=7fff9e82a26c items=0 ppid=2879 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 24 00:29:12.337000 audit[2942]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.337000 audit[2942]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbe550ca0 a2=0 a3=7ffdbe550c8c items=0 ppid=2879 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:29:12.340000 audit[2944]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.340000 audit[2944]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd52bbac60 a2=0 a3=7ffd52bbac4c items=0 ppid=2879 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:29:12.342000 audit[2945]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.342000 audit[2945]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd4521c90 a2=0 a3=7ffcd4521c7c items=0 ppid=2879 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:29:12.345000 audit[2947]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.345000 audit[2947]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 
a0=3 a1=7fff10eecb50 a2=0 a3=7fff10eecb3c items=0 ppid=2879 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 00:29:12.350000 audit[2950]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.350000 audit[2950]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffca9db9fb0 a2=0 a3=7ffca9db9f9c items=0 ppid=2879 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 24 00:29:12.351000 audit[2951]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.351000 audit[2951]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc00570870 a2=0 a3=7ffc0057085c items=0 ppid=2879 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:29:12.355000 audit[2953]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.355000 audit[2953]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd9c808120 a2=0 a3=7ffd9c80810c items=0 ppid=2879 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:29:12.356000 audit[2954]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.356000 audit[2954]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3845a640 a2=0 a3=7fff3845a62c items=0 ppid=2879 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 
00:29:12.359000 audit[2956]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.359000 audit[2956]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd49d151d0 a2=0 a3=7ffd49d151bc items=0 ppid=2879 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.359000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:29:12.363000 audit[2959]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.363000 audit[2959]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd23107c90 a2=0 a3=7ffd23107c7c items=0 ppid=2879 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.363000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:29:12.368000 audit[2962]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.368000 audit[2962]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6ad687a0 a2=0 a3=7ffd6ad6878c items=0 ppid=2879 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.368000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 24 00:29:12.370000 audit[2963]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.370000 audit[2963]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe0f4d5a40 a2=0 a3=7ffe0f4d5a2c items=0 ppid=2879 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.370000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:29:12.373000 audit[2965]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=2965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.373000 audit[2965]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc8745c2f0 a2=0 a3=7ffc8745c2dc items=0 ppid=2879 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.373000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:29:12.377000 audit[2968]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=2968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.377000 audit[2968]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd10a14b0 a2=0 a3=7ffdd10a149c items=0 ppid=2879 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.377000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:29:12.378000 audit[2969]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.378000 audit[2969]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc79d9f570 a2=0 a3=7ffc79d9f55c items=0 ppid=2879 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:29:12.382000 audit[2971]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 24 00:29:12.382000 audit[2971]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd31890e70 a2=0 a3=7ffd31890e5c items=0 ppid=2879 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:29:12.404000 audit[2977]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:12.404000 audit[2977]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc576135c0 a2=0 a3=7ffc576135ac items=0 ppid=2879 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:12.411000 audit[2977]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:12.411000 audit[2977]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=5508 a0=3 a1=7ffc576135c0 a2=0 a3=7ffc576135ac items=0 ppid=2879 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:12.412000 audit[2982]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.412000 audit[2982]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe697716b0 a2=0 a3=7ffe6977169c items=0 ppid=2879 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 24 00:29:12.416000 audit[2984]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.416000 audit[2984]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe5536f8d0 a2=0 a3=7ffe5536f8bc items=0 ppid=2879 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 24 00:29:12.420000 audit[2987]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=2987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.420000 audit[2987]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdc6f2b850 a2=0 a3=7ffdc6f2b83c items=0 ppid=2879 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.420000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 24 00:29:12.422000 audit[2988]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.422000 audit[2988]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbe0caa70 a2=0 a3=7ffcbe0caa5c items=0 ppid=2879 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 24 00:29:12.425000 audit[2990]: NETFILTER_CFG table=filter:85 family=10 entries=1 
op=nft_register_rule pid=2990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.425000 audit[2990]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffce3101af0 a2=0 a3=7ffce3101adc items=0 ppid=2879 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.425000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 24 00:29:12.426000 audit[2991]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.426000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc4ccdc40 a2=0 a3=7ffdc4ccdc2c items=0 ppid=2879 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 24 00:29:12.429000 audit[2993]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.429000 audit[2993]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc07243230 a2=0 a3=7ffc0724321c items=0 ppid=2879 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 24 00:29:12.434000 audit[2996]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=2996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.434000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdd6f251d0 a2=0 a3=7ffdd6f251bc items=0 ppid=2879 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 24 00:29:12.435000 audit[2997]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.435000 audit[2997]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd981f41f0 a2=0 a3=7ffd981f41dc items=0 ppid=2879 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.435000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 24 00:29:12.439000 audit[2999]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.439000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2192baa0 a2=0 a3=7ffd2192ba8c items=0 ppid=2879 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.439000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 24 00:29:12.440000 audit[3000]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.440000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9537f100 a2=0 a3=7fff9537f0ec items=0 ppid=2879 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 24 00:29:12.443000 audit[3002]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.443000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde8d9b770 a2=0 a3=7ffde8d9b75c items=0 ppid=2879 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 24 00:29:12.448000 audit[3005]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.448000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd3a76c0c0 a2=0 a3=7ffd3a76c0ac items=0 ppid=2879 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 24 00:29:12.453000 audit[3008]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.453000 audit[3008]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe721b6550 a2=0 a3=7ffe721b653c items=0 ppid=2879 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 24 00:29:12.455000 audit[3009]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.455000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa0503270 a2=0 a3=7fffa050325c items=0 ppid=2879 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 24 00:29:12.458000 audit[3011]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.458000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff81d78ed0 a2=0 a3=7fff81d78ebc items=0 ppid=2879 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:29:12.463000 audit[3014]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.463000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe089c1470 a2=0 a3=7ffe089c145c items=0 ppid=2879 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 24 00:29:12.464000 audit[3015]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.464000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc2007b70 a2=0 a3=7ffdc2007b5c items=0 ppid=2879 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 24 00:29:12.467000 
audit[3017]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.467000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd4de2f8f0 a2=0 a3=7ffd4de2f8dc items=0 ppid=2879 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 24 00:29:12.469000 audit[3018]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.469000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc72a2c3f0 a2=0 a3=7ffc72a2c3dc items=0 ppid=2879 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 24 00:29:12.472000 audit[3020]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.472000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe23fc1400 a2=0 a3=7ffe23fc13ec items=0 ppid=2879 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:29:12.476000 audit[3023]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 24 00:29:12.476000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffff1f22430 a2=0 a3=7ffff1f2241c items=0 ppid=2879 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 24 00:29:12.481000 audit[3025]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:29:12.481000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff592788c0 a2=0 a3=7fff592788ac items=0 ppid=2879 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.481000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 
00:29:12.481000 audit[3025]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 24 00:29:12.481000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff592788c0 a2=0 a3=7fff592788ac items=0 ppid=2879 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:12.481000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:12.845069 kubelet[2775]: E0124 00:29:12.844989 2775 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:29:12.845069 kubelet[2775]: E0124 00:29:12.845027 2775 projected.go:194] Error preparing data for projected volume kube-api-access-w9267 for pod tigera-operator/tigera-operator-7dcd859c48-6xl9s: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:29:12.845069 kubelet[2775]: E0124 00:29:12.845092 2775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c789c7f0-be5e-4a02-bea2-5755d320c0ad-kube-api-access-w9267 podName:c789c7f0-be5e-4a02-bea2-5755d320c0ad nodeName:}" failed. No retries permitted until 2026-01-24 00:29:13.345073252 +0000 UTC m=+7.619662170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w9267" (UniqueName: "kubernetes.io/projected/c789c7f0-be5e-4a02-bea2-5755d320c0ad-kube-api-access-w9267") pod "tigera-operator-7dcd859c48-6xl9s" (UID: "c789c7f0-be5e-4a02-bea2-5755d320c0ad") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:29:12.856513 kubelet[2775]: E0124 00:29:12.856424 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:13.485520 containerd[1583]: time="2026-01-24T00:29:13.485459222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6xl9s,Uid:c789c7f0-be5e-4a02-bea2-5755d320c0ad,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:29:13.501942 containerd[1583]: time="2026-01-24T00:29:13.501886894Z" level=info msg="connecting to shim 4a887818b81f3d437782d5980a7a5057b475850b227f6348ab7be2602139ef59" address="unix:///run/containerd/s/9e05192bf065aa3fd8af41a62156856e796dae20709616faa214c736c34c8ad0" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:13.527537 systemd[1]: Started cri-containerd-4a887818b81f3d437782d5980a7a5057b475850b227f6348ab7be2602139ef59.scope - libcontainer container 4a887818b81f3d437782d5980a7a5057b475850b227f6348ab7be2602139ef59. 
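
The PROCTITLE fields in the audit records above are the invoked command lines, hex-encoded with NUL bytes separating the arguments; decoded, they show the KUBE-* chains being created in the IPv6 filter and nat tables (family=10 is AF_INET6, and the parent pid 2879 is presumably kube-proxy driving xtables-nft-multi). A minimal Python sketch, reusing the 00:29:12.455 record verbatim, for decoding one of them:

    # Decode an audit PROCTITLE value: hex string, NUL-separated argv.
    proctitle = (
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D5345525649434553002D74006E6174"
    )
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> ip6tables -w 5 -W 100000 -N KUBE-SERVICES -t nat

The later ip6tables-resto / iptables-restor records decode the same way (to "ip6tables-restore -w 5 -W 100000 --noflush --counters" and the IPv4 equivalent).
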
Jan 24 00:29:13.540000 audit: BPF prog-id=153 op=LOAD Jan 24 00:29:13.540000 audit: BPF prog-id=154 op=LOAD Jan 24 00:29:13.540000 audit[3047]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3036 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:13.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383837383138623831663364343337373832643539383061376135 Jan 24 00:29:13.540000 audit: BPF prog-id=154 op=UNLOAD Jan 24 00:29:13.540000 audit[3047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3036 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:13.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383837383138623831663364343337373832643539383061376135 Jan 24 00:29:13.541000 audit: BPF prog-id=155 op=LOAD Jan 24 00:29:13.541000 audit[3047]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3036 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:13.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383837383138623831663364343337373832643539383061376135 Jan 24 00:29:13.541000 audit: BPF prog-id=156 op=LOAD Jan 24 00:29:13.541000 audit[3047]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3036 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:13.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383837383138623831663364343337373832643539383061376135 Jan 24 00:29:13.541000 audit: BPF prog-id=156 op=UNLOAD Jan 24 00:29:13.541000 audit[3047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3036 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:13.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383837383138623831663364343337373832643539383061376135 Jan 24 00:29:13.541000 audit: BPF prog-id=155 op=UNLOAD Jan 24 00:29:13.541000 audit[3047]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3036 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:13.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383837383138623831663364343337373832643539383061376135 Jan 24 00:29:13.541000 audit: BPF prog-id=157 op=LOAD Jan 24 00:29:13.541000 audit[3047]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3036 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:13.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383837383138623831663364343337373832643539383061376135 Jan 24 00:29:13.573418 containerd[1583]: time="2026-01-24T00:29:13.573196048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6xl9s,Uid:c789c7f0-be5e-4a02-bea2-5755d320c0ad,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4a887818b81f3d437782d5980a7a5057b475850b227f6348ab7be2602139ef59\"" Jan 24 00:29:13.576326 containerd[1583]: time="2026-01-24T00:29:13.576261577Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:29:14.196857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844417669.mount: Deactivated successfully. 
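
The runs of "BPF prog-id=… op=LOAD/UNLOAD" above come from runc preparing the container's cgroup (on cgroup v2 the LOAD is typically the device-access filter program); arch=c000003e marks the x86_64 syscall table, where 321 is bpf(2), 3 is close(2), and 46 (seen in the ip6tables records) is sendmsg(2). A small sketch, using only the text of one runc SYSCALL record above, for picking out the interesting fields:

    # Parse an audit SYSCALL record's key=value fields and name the
    # syscall (x86_64 numbers). exit=21 is the fd returned by bpf().
    record = ('arch=c000003e syscall=321 success=yes exit=21 a0=5 '
              'a1=c00017a238 a2=98 a3=0 ppid=3036 pid=3047 '
              'comm="runc" exe="/usr/bin/runc"')
    fields = dict(tok.split("=", 1) for tok in record.split())
    names = {"3": "close", "46": "sendmsg", "321": "bpf"}
    print(fields["comm"], names.get(fields["syscall"], "?"),
          "returned", fields["exit"])
    # -> "runc" bpf returned 21
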
Jan 24 00:29:14.668226 containerd[1583]: time="2026-01-24T00:29:14.668163031Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:14.669295 containerd[1583]: time="2026-01-24T00:29:14.669135920Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Jan 24 00:29:14.669876 containerd[1583]: time="2026-01-24T00:29:14.669844970Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:14.671500 containerd[1583]: time="2026-01-24T00:29:14.671464859Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:14.672132 containerd[1583]: time="2026-01-24T00:29:14.672103339Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.095810942s" Jan 24 00:29:14.672195 containerd[1583]: time="2026-01-24T00:29:14.672182029Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:29:14.674235 containerd[1583]: time="2026-01-24T00:29:14.674183468Z" level=info msg="CreateContainer within sandbox \"4a887818b81f3d437782d5980a7a5057b475850b227f6348ab7be2602139ef59\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:29:14.682364 containerd[1583]: time="2026-01-24T00:29:14.682329953Z" level=info msg="Container 02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:14.689369 containerd[1583]: time="2026-01-24T00:29:14.689326860Z" level=info msg="CreateContainer within sandbox \"4a887818b81f3d437782d5980a7a5057b475850b227f6348ab7be2602139ef59\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3\"" Jan 24 00:29:14.689961 containerd[1583]: time="2026-01-24T00:29:14.689923720Z" level=info msg="StartContainer for \"02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3\"" Jan 24 00:29:14.690952 containerd[1583]: time="2026-01-24T00:29:14.690924379Z" level=info msg="connecting to shim 02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3" address="unix:///run/containerd/s/9e05192bf065aa3fd8af41a62156856e796dae20709616faa214c736c34c8ad0" protocol=ttrpc version=3 Jan 24 00:29:14.715528 systemd[1]: Started cri-containerd-02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3.scope - libcontainer container 02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3. 
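
The "Pulled image … size \"25057686\" in 1.095810942s" line above gives enough to estimate pull throughput for the tigera-operator image; treating the reported size as bytes:

    # Rough pull rate from the values containerd logged above.
    size_bytes = 25_057_686
    duration_s = 1.095810942
    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s "
          f"({size_bytes / duration_s / 2**20:.1f} MiB/s)")
    # -> 22.9 MB/s (21.8 MiB/s)

The repo digest in the same message is what pins the pulled content; the v1.38.7 tag is only a pointer to it.
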
Jan 24 00:29:14.729000 audit: BPF prog-id=158 op=LOAD Jan 24 00:29:14.729000 audit: BPF prog-id=159 op=LOAD Jan 24 00:29:14.729000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3036 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032646265636264303938333238653732333330373938386339643539 Jan 24 00:29:14.729000 audit: BPF prog-id=159 op=UNLOAD Jan 24 00:29:14.729000 audit[3080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3036 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032646265636264303938333238653732333330373938386339643539 Jan 24 00:29:14.729000 audit: BPF prog-id=160 op=LOAD Jan 24 00:29:14.729000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3036 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032646265636264303938333238653732333330373938386339643539 Jan 24 00:29:14.729000 audit: BPF prog-id=161 op=LOAD Jan 24 00:29:14.729000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3036 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032646265636264303938333238653732333330373938386339643539 Jan 24 00:29:14.729000 audit: BPF prog-id=161 op=UNLOAD Jan 24 00:29:14.729000 audit[3080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3036 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032646265636264303938333238653732333330373938386339643539 Jan 24 00:29:14.729000 audit: BPF prog-id=160 op=UNLOAD Jan 24 00:29:14.729000 audit[3080]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3036 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032646265636264303938333238653732333330373938386339643539 Jan 24 00:29:14.729000 audit: BPF prog-id=162 op=LOAD Jan 24 00:29:14.729000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3036 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032646265636264303938333238653732333330373938386339643539 Jan 24 00:29:14.751536 containerd[1583]: time="2026-01-24T00:29:14.751323169Z" level=info msg="StartContainer for \"02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3\" returns successfully" Jan 24 00:29:14.870940 kubelet[2775]: I0124 00:29:14.870742 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gfpvc" podStartSLOduration=3.870708789 podStartE2EDuration="3.870708789s" podCreationTimestamp="2026-01-24 00:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:29:12.865328192 +0000 UTC m=+7.139917100" watchObservedRunningTime="2026-01-24 00:29:14.870708789 +0000 UTC m=+9.145297697" Jan 24 00:29:15.219322 kubelet[2775]: E0124 00:29:15.219275 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:15.235011 kubelet[2775]: I0124 00:29:15.234705 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-6xl9s" podStartSLOduration=3.137440166 podStartE2EDuration="4.234693827s" podCreationTimestamp="2026-01-24 00:29:11 +0000 UTC" firstStartedPulling="2026-01-24 00:29:13.575649547 +0000 UTC m=+7.850238455" lastFinishedPulling="2026-01-24 00:29:14.672903208 +0000 UTC m=+8.947492116" observedRunningTime="2026-01-24 00:29:14.872020109 +0000 UTC m=+9.146609017" watchObservedRunningTime="2026-01-24 00:29:15.234693827 +0000 UTC m=+9.509282745" Jan 24 00:29:15.864409 kubelet[2775]: E0124 00:29:15.863542 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:16.275237 kubelet[2775]: E0124 00:29:16.274908 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:16.865467 kubelet[2775]: E0124 00:29:16.865416 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:17.681746 systemd-resolved[1275]: Clock change detected. Flushing caches. Jan 24 00:29:17.681991 systemd-timesyncd[1510]: Contacted time server [2600:3c01::f03c:93ff:fedd:5a1f]:123 (2.flatcar.pool.ntp.org). Jan 24 00:29:17.682049 systemd-timesyncd[1510]: Initial clock synchronization to Sat 2026-01-24 00:29:17.681693 UTC. Jan 24 00:29:17.696149 systemd[1]: cri-containerd-02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3.scope: Deactivated successfully. Jan 24 00:29:17.702422 containerd[1583]: time="2026-01-24T00:29:17.702275890Z" level=info msg="received container exit event container_id:\"02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3\" id:\"02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3\" pid:3092 exit_status:1 exited_at:{seconds:1769214557 nanos:699956821}" Jan 24 00:29:17.726202 kernel: kauditd_printk_skb: 226 callbacks suppressed Jan 24 00:29:17.726301 kernel: audit: type=1334 audit(1769214557.720:533): prog-id=158 op=UNLOAD Jan 24 00:29:17.720000 audit: BPF prog-id=158 op=UNLOAD Jan 24 00:29:17.720000 audit: BPF prog-id=162 op=UNLOAD Jan 24 00:29:17.730707 kernel: audit: type=1334 audit(1769214557.720:534): prog-id=162 op=UNLOAD Jan 24 00:29:17.741990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3-rootfs.mount: Deactivated successfully. Jan 24 00:29:18.381910 kubelet[2775]: I0124 00:29:18.381875 2775 scope.go:117] "RemoveContainer" containerID="02dbecbd098328e723307988c9d5974f703acf126d57d1bcbb061b54ed655ee3" Jan 24 00:29:18.384934 containerd[1583]: time="2026-01-24T00:29:18.384898138Z" level=info msg="CreateContainer within sandbox \"4a887818b81f3d437782d5980a7a5057b475850b227f6348ab7be2602139ef59\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 24 00:29:18.396668 containerd[1583]: time="2026-01-24T00:29:18.396407923Z" level=info msg="Container 540c7a1d1080537c6a0fb24db2b26caa9b859bafbde36457d066fa64b145a26d: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:18.413674 containerd[1583]: time="2026-01-24T00:29:18.413557824Z" level=info msg="CreateContainer within sandbox \"4a887818b81f3d437782d5980a7a5057b475850b227f6348ab7be2602139ef59\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"540c7a1d1080537c6a0fb24db2b26caa9b859bafbde36457d066fa64b145a26d\"" Jan 24 00:29:18.415666 containerd[1583]: time="2026-01-24T00:29:18.415448243Z" level=info msg="StartContainer for \"540c7a1d1080537c6a0fb24db2b26caa9b859bafbde36457d066fa64b145a26d\"" Jan 24 00:29:18.416252 containerd[1583]: time="2026-01-24T00:29:18.416225003Z" level=info msg="connecting to shim 540c7a1d1080537c6a0fb24db2b26caa9b859bafbde36457d066fa64b145a26d" address="unix:///run/containerd/s/9e05192bf065aa3fd8af41a62156856e796dae20709616faa214c736c34c8ad0" protocol=ttrpc version=3 Jan 24 00:29:18.456964 systemd[1]: Started cri-containerd-540c7a1d1080537c6a0fb24db2b26caa9b859bafbde36457d066fa64b145a26d.scope - libcontainer container 540c7a1d1080537c6a0fb24db2b26caa9b859bafbde36457d066fa64b145a26d. 
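
The recurring kubelet dns.go:153 errors above are a limit warning, not a lookup failure: the libc resolver honors at most three nameserver entries, so kubelet trims the node's resolv.conf to the three addresses shown (172.232.0.17, 172.232.0.16, 172.232.0.21) and notes that the rest were omitted. A quick local check, assuming the standard /etc/resolv.conf path:

    # Count nameserver lines; more than 3 triggers the kubelet warning.
    with open("/etc/resolv.conf") as f:
        servers = [parts[1] for line in f
                   if (parts := line.split()) and parts[0] == "nameserver"]
    print(len(servers), "nameservers:", ", ".join(servers))
    if len(servers) > 3:
        print("more than 3 -> kubelet warns and drops the extras")
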
Jan 24 00:29:18.478000 audit: BPF prog-id=163 op=LOAD Jan 24 00:29:18.482806 kernel: audit: type=1334 audit(1769214558.478:535): prog-id=163 op=LOAD Jan 24 00:29:18.481000 audit: BPF prog-id=164 op=LOAD Jan 24 00:29:18.498161 kernel: audit: type=1334 audit(1769214558.481:536): prog-id=164 op=LOAD Jan 24 00:29:18.498269 kernel: audit: type=1300 audit(1769214558.481:536): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3036 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:18.481000 audit[3151]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3036 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:18.507189 kernel: audit: type=1327 audit(1769214558.481:536): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306337613164313038303533376336613066623234646232623236 Jan 24 00:29:18.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306337613164313038303533376336613066623234646232623236 Jan 24 00:29:18.509715 kernel: audit: type=1334 audit(1769214558.481:537): prog-id=164 op=UNLOAD Jan 24 00:29:18.481000 audit: BPF prog-id=164 op=UNLOAD Jan 24 00:29:18.517702 kernel: audit: type=1300 audit(1769214558.481:537): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3036 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:18.481000 audit[3151]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3036 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:18.525713 kernel: audit: type=1327 audit(1769214558.481:537): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306337613164313038303533376336613066623234646232623236 Jan 24 00:29:18.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306337613164313038303533376336613066623234646232623236 Jan 24 00:29:18.527781 kernel: audit: type=1334 audit(1769214558.481:538): prog-id=165 op=LOAD Jan 24 00:29:18.481000 audit: BPF prog-id=165 op=LOAD Jan 24 00:29:18.481000 audit[3151]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3036 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
24 00:29:18.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306337613164313038303533376336613066623234646232623236 Jan 24 00:29:18.482000 audit: BPF prog-id=166 op=LOAD Jan 24 00:29:18.482000 audit[3151]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3036 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:18.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306337613164313038303533376336613066623234646232623236 Jan 24 00:29:18.482000 audit: BPF prog-id=166 op=UNLOAD Jan 24 00:29:18.482000 audit[3151]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3036 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:18.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306337613164313038303533376336613066623234646232623236 Jan 24 00:29:18.482000 audit: BPF prog-id=165 op=UNLOAD Jan 24 00:29:18.482000 audit[3151]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3036 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:18.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306337613164313038303533376336613066623234646232623236 Jan 24 00:29:18.482000 audit: BPF prog-id=167 op=LOAD Jan 24 00:29:18.482000 audit[3151]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3036 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:18.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534306337613164313038303533376336613066623234646232623236 Jan 24 00:29:18.559128 containerd[1583]: time="2026-01-24T00:29:18.559090941Z" level=info msg="StartContainer for \"540c7a1d1080537c6a0fb24db2b26caa9b859bafbde36457d066fa64b145a26d\" returns successfully" Jan 24 00:29:18.736558 kubelet[2775]: E0124 00:29:18.736473 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:20.745466 sudo[1855]: pam_unix(sudo:session): session closed for 
user root Jan 24 00:29:20.743000 audit[1855]: USER_END pid=1855 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:29:20.743000 audit[1855]: CRED_DISP pid=1855 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 24 00:29:20.766970 sshd[1854]: Connection closed by 68.220.241.50 port 53066 Jan 24 00:29:20.768943 sshd-session[1850]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:20.768000 audit[1850]: USER_END pid=1850 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:29:20.768000 audit[1850]: CRED_DISP pid=1850 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:29:20.773539 systemd-logind[1571]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:29:20.774568 systemd[1]: sshd@6-172.234.200.140:22-68.220.241.50:53066.service: Deactivated successfully. Jan 24 00:29:20.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.234.200.140:22-68.220.241.50:53066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:29:20.778432 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:29:20.778743 systemd[1]: session-8.scope: Consumed 4.407s CPU time, 228.6M memory peak. Jan 24 00:29:20.781343 systemd-logind[1571]: Removed session 8. 
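
The USER_END / CRED_DISP records above close out interactive session 8 (the sudo shell, then the SSH connection from 68.220.241.50); their msg='…' payloads are themselves space-separated key=value pairs, with quoted values for the account and executable. A sketch for splitting one apart, using the session_close payload verbatim:

    import re

    # Field/value pairs inside the audit msg='...' payload; values may
    # be bare tokens or double-quoted strings.
    msg = ('op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,'
           'pam_unix acct="root" exe="/usr/bin/sudo" hostname=? '
           'addr=? terminal=? res=success')
    fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', msg))
    print(fields["op"], fields["acct"], fields["res"])
    # -> PAM:session_close "root" success
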
Jan 24 00:29:22.562000 audit[3205]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:22.562000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc41a842b0 a2=0 a3=7ffc41a8429c items=0 ppid=2879 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:22.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:22.567000 audit[3205]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:22.567000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc41a842b0 a2=0 a3=0 items=0 ppid=2879 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:22.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:22.586000 audit[3207]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:22.586000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd4a679700 a2=0 a3=7ffd4a6796ec items=0 ppid=2879 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:22.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:22.605000 audit[3207]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:22.605000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd4a679700 a2=0 a3=0 items=0 ppid=2879 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:22.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:24.663087 kernel: kauditd_printk_skb: 31 callbacks suppressed Jan 24 00:29:24.663227 kernel: audit: type=1325 audit(1769214564.655:552): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:24.655000 audit[3210]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:24.669711 kernel: audit: type=1300 audit(1769214564.655:552): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd3ec61a60 a2=0 a3=7ffd3ec61a4c items=0 ppid=2879 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:24.655000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd3ec61a60 a2=0 a3=7ffd3ec61a4c items=0 ppid=2879 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:24.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:24.682691 kernel: audit: type=1327 audit(1769214564.655:552): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:24.683000 audit[3210]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:24.690675 kernel: audit: type=1325 audit(1769214564.683:553): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:24.683000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3ec61a60 a2=0 a3=0 items=0 ppid=2879 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:24.727330 kernel: audit: type=1300 audit(1769214564.683:553): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3ec61a60 a2=0 a3=0 items=0 ppid=2879 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:24.727387 kernel: audit: type=1327 audit(1769214564.683:553): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:24.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:24.753000 audit[3212]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:24.761042 kernel: audit: type=1325 audit(1769214564.753:554): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:24.753000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe51819e00 a2=0 a3=7ffe51819dec items=0 ppid=2879 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:24.771152 kernel: audit: type=1300 audit(1769214564.753:554): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe51819e00 a2=0 a3=7ffe51819dec items=0 ppid=2879 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:24.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:24.777135 kernel: audit: type=1327 audit(1769214564.753:554): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:24.770000 audit[3212]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:24.770000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe51819e00 a2=0 a3=0 items=0 ppid=2879 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:24.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:24.782673 kernel: audit: type=1325 audit(1769214564.770:555): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:25.796000 audit[3214]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:25.796000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8b71cfe0 a2=0 a3=7ffc8b71cfcc items=0 ppid=2879 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:25.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:25.802000 audit[3214]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:25.802000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc8b71cfe0 a2=0 a3=0 items=0 ppid=2879 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:25.802000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:26.170473 update_engine[1572]: I20260124 00:29:26.169744 1572 update_attempter.cc:509] Updating boot flags... Jan 24 00:29:26.465915 systemd[1]: Created slice kubepods-besteffort-poddbb7e0e4_8ccb_495b_a22f_0ba3cb54f2d7.slice - libcontainer container kubepods-besteffort-poddbb7e0e4_8ccb_495b_a22f_0ba3cb54f2d7.slice. 
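
In the kernel-printed audit lines above ("kauditd_printk_skb: … callbacks suppressed" marks records rate-limited out of the console printk path, not out of the audit log), each record carries audit(EPOCH.msec:SERIAL): the epoch maps back to the journald wall-clock timestamp, and the serial (":552", ":553", …) ties together the NETFILTER_CFG (type=1325), SYSCALL (type=1300) and PROCTITLE (type=1327) lines of a single event. For the :552 event:

    # Convert the audit epoch back to wall-clock UTC.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1769214564.655, tz=timezone.utc))
    # -> 2026-01-24 00:29:24.655000+00:00  (matches the Jan 24 00:29:24 entry)
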
Jan 24 00:29:26.549062 kubelet[2775]: I0124 00:29:26.549002 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8r5d\" (UniqueName: \"kubernetes.io/projected/dbb7e0e4-8ccb-495b-a22f-0ba3cb54f2d7-kube-api-access-d8r5d\") pod \"calico-typha-6d8c478c95-r4hcl\" (UID: \"dbb7e0e4-8ccb-495b-a22f-0ba3cb54f2d7\") " pod="calico-system/calico-typha-6d8c478c95-r4hcl" Jan 24 00:29:26.549577 kubelet[2775]: I0124 00:29:26.549159 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dbb7e0e4-8ccb-495b-a22f-0ba3cb54f2d7-typha-certs\") pod \"calico-typha-6d8c478c95-r4hcl\" (UID: \"dbb7e0e4-8ccb-495b-a22f-0ba3cb54f2d7\") " pod="calico-system/calico-typha-6d8c478c95-r4hcl" Jan 24 00:29:26.549577 kubelet[2775]: I0124 00:29:26.549186 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbb7e0e4-8ccb-495b-a22f-0ba3cb54f2d7-tigera-ca-bundle\") pod \"calico-typha-6d8c478c95-r4hcl\" (UID: \"dbb7e0e4-8ccb-495b-a22f-0ba3cb54f2d7\") " pod="calico-system/calico-typha-6d8c478c95-r4hcl" Jan 24 00:29:26.757821 systemd[1]: Created slice kubepods-besteffort-pod4b294bc6_db63_4a15_ab6d_1c4d82b485e2.slice - libcontainer container kubepods-besteffort-pod4b294bc6_db63_4a15_ab6d_1c4d82b485e2.slice. Jan 24 00:29:26.812000 audit[3243]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:26.812000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc36b11440 a2=0 a3=7ffc36b1142c items=0 ppid=2879 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:26.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:26.822000 audit[3243]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:26.822000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc36b11440 a2=0 a3=0 items=0 ppid=2879 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:26.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:26.851981 kubelet[2775]: I0124 00:29:26.851906 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c528h\" (UniqueName: \"kubernetes.io/projected/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-kube-api-access-c528h\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.851981 kubelet[2775]: I0124 00:29:26.851945 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-cni-net-dir\") pod \"calico-node-4p7jw\" (UID: 
\"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.852349 kubelet[2775]: I0124 00:29:26.852311 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-policysync\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.852349 kubelet[2775]: I0124 00:29:26.852341 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-lib-modules\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.852506 kubelet[2775]: I0124 00:29:26.852405 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-tigera-ca-bundle\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.852506 kubelet[2775]: I0124 00:29:26.852424 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-cni-bin-dir\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.852506 kubelet[2775]: I0124 00:29:26.852440 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-cni-log-dir\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.852506 kubelet[2775]: I0124 00:29:26.852504 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-flexvol-driver-host\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.852681 kubelet[2775]: I0124 00:29:26.852519 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-node-certs\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.852681 kubelet[2775]: I0124 00:29:26.852571 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-var-lib-calico\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.852681 kubelet[2775]: I0124 00:29:26.852585 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-var-run-calico\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 
00:29:26.852681 kubelet[2775]: I0124 00:29:26.852599 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b294bc6-db63-4a15-ab6d-1c4d82b485e2-xtables-lock\") pod \"calico-node-4p7jw\" (UID: \"4b294bc6-db63-4a15-ab6d-1c4d82b485e2\") " pod="calico-system/calico-node-4p7jw" Jan 24 00:29:26.864930 kubelet[2775]: E0124 00:29:26.864792 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:26.865908 containerd[1583]: time="2026-01-24T00:29:26.865743507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d8c478c95-r4hcl,Uid:dbb7e0e4-8ccb-495b-a22f-0ba3cb54f2d7,Namespace:calico-system,Attempt:0,}" Jan 24 00:29:26.882520 kubelet[2775]: E0124 00:29:26.882451 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:29:26.900808 containerd[1583]: time="2026-01-24T00:29:26.900066270Z" level=info msg="connecting to shim e5a7af01a03df18ae9d7f0b1f0ff5b0f211a37f7594da87955da719d15bb5f78" address="unix:///run/containerd/s/17d48024e4b66543e83d0c34238f986f00690fb2c6a7ffdd664a0b8dd5cf0889" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:26.935068 systemd[1]: Started cri-containerd-e5a7af01a03df18ae9d7f0b1f0ff5b0f211a37f7594da87955da719d15bb5f78.scope - libcontainer container e5a7af01a03df18ae9d7f0b1f0ff5b0f211a37f7594da87955da719d15bb5f78. 
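
The pod_workers error for csi-node-driver-dzs24 above ("NetworkReady=false … cni plugin not initialized") is expected at this stage: the container runtime reports the node network as not ready until a CNI network config exists, and it is calico-node, whose cni-net-dir and cni-bin-dir host-path volumes are being attached in the entries above, that installs that config. A quick check of the conf directory, assuming the common default /etc/cni/net.d:

    # List CNI network configs; empty until calico-node's installer runs.
    import pathlib
    confs = sorted(pathlib.Path("/etc/cni/net.d").glob("*.conf*"))
    print("CNI configs:", [p.name for p in confs] or "none yet")
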
Jan 24 00:29:26.948000 audit: BPF prog-id=168 op=LOAD Jan 24 00:29:26.948000 audit: BPF prog-id=169 op=LOAD Jan 24 00:29:26.948000 audit[3262]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3252 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:26.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535613761663031613033646631386165396437663062316630666635 Jan 24 00:29:26.948000 audit: BPF prog-id=169 op=UNLOAD Jan 24 00:29:26.948000 audit[3262]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:26.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535613761663031613033646631386165396437663062316630666635 Jan 24 00:29:26.949000 audit: BPF prog-id=170 op=LOAD Jan 24 00:29:26.949000 audit[3262]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3252 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:26.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535613761663031613033646631386165396437663062316630666635 Jan 24 00:29:26.949000 audit: BPF prog-id=171 op=LOAD Jan 24 00:29:26.949000 audit[3262]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3252 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:26.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535613761663031613033646631386165396437663062316630666635 Jan 24 00:29:26.949000 audit: BPF prog-id=171 op=UNLOAD Jan 24 00:29:26.949000 audit[3262]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:26.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535613761663031613033646631386165396437663062316630666635 Jan 24 00:29:26.949000 audit: BPF prog-id=170 op=UNLOAD Jan 24 00:29:26.949000 audit[3262]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:26.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535613761663031613033646631386165396437663062316630666635 Jan 24 00:29:26.949000 audit: BPF prog-id=172 op=LOAD Jan 24 00:29:26.949000 audit[3262]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3252 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:26.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535613761663031613033646631386165396437663062316630666635 Jan 24 00:29:26.953140 kubelet[2775]: I0124 00:29:26.953100 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9b58f3e4-7a56-472e-b647-710386b89b86-varrun\") pod \"csi-node-driver-dzs24\" (UID: \"9b58f3e4-7a56-472e-b647-710386b89b86\") " pod="calico-system/csi-node-driver-dzs24" Jan 24 00:29:26.953673 kubelet[2775]: I0124 00:29:26.953402 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9b58f3e4-7a56-472e-b647-710386b89b86-registration-dir\") pod \"csi-node-driver-dzs24\" (UID: \"9b58f3e4-7a56-472e-b647-710386b89b86\") " pod="calico-system/csi-node-driver-dzs24" Jan 24 00:29:26.953673 kubelet[2775]: I0124 00:29:26.953456 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9b58f3e4-7a56-472e-b647-710386b89b86-socket-dir\") pod \"csi-node-driver-dzs24\" (UID: \"9b58f3e4-7a56-472e-b647-710386b89b86\") " pod="calico-system/csi-node-driver-dzs24" Jan 24 00:29:26.953673 kubelet[2775]: I0124 00:29:26.953472 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7qvg\" (UniqueName: \"kubernetes.io/projected/9b58f3e4-7a56-472e-b647-710386b89b86-kube-api-access-g7qvg\") pod \"csi-node-driver-dzs24\" (UID: \"9b58f3e4-7a56-472e-b647-710386b89b86\") " pod="calico-system/csi-node-driver-dzs24" Jan 24 00:29:26.953673 kubelet[2775]: I0124 00:29:26.953521 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b58f3e4-7a56-472e-b647-710386b89b86-kubelet-dir\") pod \"csi-node-driver-dzs24\" (UID: \"9b58f3e4-7a56-472e-b647-710386b89b86\") " pod="calico-system/csi-node-driver-dzs24" Jan 24 00:29:26.966157 kubelet[2775]: E0124 00:29:26.966127 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:26.966157 kubelet[2775]: W0124 00:29:26.966151 2775 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:26.966246 kubelet[2775]: E0124 00:29:26.966200 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:26.976774 kubelet[2775]: E0124 00:29:26.976732 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:26.976774 kubelet[2775]: W0124 00:29:26.976771 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:26.976937 kubelet[2775]: E0124 00:29:26.976788 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.028579 containerd[1583]: time="2026-01-24T00:29:27.027450116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d8c478c95-r4hcl,Uid:dbb7e0e4-8ccb-495b-a22f-0ba3cb54f2d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5a7af01a03df18ae9d7f0b1f0ff5b0f211a37f7594da87955da719d15bb5f78\"" Jan 24 00:29:27.029412 kubelet[2775]: E0124 00:29:27.028988 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:27.030157 containerd[1583]: time="2026-01-24T00:29:27.030124635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:29:27.054841 kubelet[2775]: E0124 00:29:27.054802 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.054934 kubelet[2775]: W0124 00:29:27.054857 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.054934 kubelet[2775]: E0124 00:29:27.054882 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.056008 kubelet[2775]: E0124 00:29:27.055987 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.056008 kubelet[2775]: W0124 00:29:27.056004 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.057682 kubelet[2775]: E0124 00:29:27.056025 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:27.057682 kubelet[2775]: E0124 00:29:27.056264 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.057682 kubelet[2775]: W0124 00:29:27.056274 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.057682 kubelet[2775]: E0124 00:29:27.056283 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.057682 kubelet[2775]: E0124 00:29:27.056557 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.057682 kubelet[2775]: W0124 00:29:27.056565 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.057682 kubelet[2775]: E0124 00:29:27.056626 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.058489 kubelet[2775]: E0124 00:29:27.057064 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.058489 kubelet[2775]: W0124 00:29:27.058485 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.058545 kubelet[2775]: E0124 00:29:27.058507 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.058885 kubelet[2775]: E0124 00:29:27.058866 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.058885 kubelet[2775]: W0124 00:29:27.058880 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.059051 kubelet[2775]: E0124 00:29:27.058983 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.059285 kubelet[2775]: E0124 00:29:27.059264 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.059285 kubelet[2775]: W0124 00:29:27.059279 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.059393 kubelet[2775]: E0124 00:29:27.059367 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:27.059562 kubelet[2775]: E0124 00:29:27.059539 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.059562 kubelet[2775]: W0124 00:29:27.059557 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.059616 kubelet[2775]: E0124 00:29:27.059584 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.059939 kubelet[2775]: E0124 00:29:27.059920 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.060018 kubelet[2775]: W0124 00:29:27.059994 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.060077 kubelet[2775]: E0124 00:29:27.060065 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.060472 kubelet[2775]: E0124 00:29:27.060460 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.060533 kubelet[2775]: W0124 00:29:27.060522 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.060581 kubelet[2775]: E0124 00:29:27.060571 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.060871 kubelet[2775]: E0124 00:29:27.060848 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.060871 kubelet[2775]: W0124 00:29:27.060858 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.061744 kubelet[2775]: E0124 00:29:27.061686 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.062054 kubelet[2775]: E0124 00:29:27.062028 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.062054 kubelet[2775]: W0124 00:29:27.062040 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.062189 kubelet[2775]: E0124 00:29:27.062170 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:27.062456 kubelet[2775]: E0124 00:29:27.062432 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.062456 kubelet[2775]: W0124 00:29:27.062443 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.062603 kubelet[2775]: E0124 00:29:27.062591 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.062865 kubelet[2775]: E0124 00:29:27.062852 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.063723 kubelet[2775]: W0124 00:29:27.063708 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.064141 kubelet[2775]: E0124 00:29:27.063877 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:27.064418 kubelet[2775]: E0124 00:29:27.064400 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.064418 kubelet[2775]: E0124 00:29:27.069886 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.064418 kubelet[2775]: W0124 00:29:27.069935 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.064418 kubelet[2775]: E0124 00:29:27.070044 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.064418 kubelet[2775]: E0124 00:29:27.070235 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.064418 kubelet[2775]: W0124 00:29:27.070243 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.064418 kubelet[2775]: E0124 00:29:27.070357 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:27.064418 kubelet[2775]: E0124 00:29:27.070686 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.064418 kubelet[2775]: W0124 00:29:27.070696 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.064418 kubelet[2775]: E0124 00:29:27.070871 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.073403 kubelet[2775]: E0124 00:29:27.070956 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.073403 kubelet[2775]: W0124 00:29:27.070966 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.073403 kubelet[2775]: E0124 00:29:27.070997 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.073403 kubelet[2775]: E0124 00:29:27.071395 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.073403 kubelet[2775]: W0124 00:29:27.071406 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.073403 kubelet[2775]: E0124 00:29:27.071561 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.073403 kubelet[2775]: E0124 00:29:27.071967 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.073403 kubelet[2775]: W0124 00:29:27.071978 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.073403 kubelet[2775]: E0124 00:29:27.073300 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:27.074520 containerd[1583]: time="2026-01-24T00:29:27.064704897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4p7jw,Uid:4b294bc6-db63-4a15-ab6d-1c4d82b485e2,Namespace:calico-system,Attempt:0,}" Jan 24 00:29:27.074808 kubelet[2775]: E0124 00:29:27.074772 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.074808 kubelet[2775]: W0124 00:29:27.074794 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.077754 kubelet[2775]: E0124 00:29:27.077717 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.077802 kubelet[2775]: E0124 00:29:27.077770 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.078076 kubelet[2775]: W0124 00:29:27.077735 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.078076 kubelet[2775]: E0124 00:29:27.077922 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.078745 kubelet[2775]: E0124 00:29:27.078303 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.078745 kubelet[2775]: W0124 00:29:27.078744 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.078818 kubelet[2775]: E0124 00:29:27.078757 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.079203 kubelet[2775]: E0124 00:29:27.079181 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.079622 kubelet[2775]: W0124 00:29:27.079593 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.079622 kubelet[2775]: E0124 00:29:27.079622 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:27.087779 kubelet[2775]: E0124 00:29:27.087755 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.087779 kubelet[2775]: W0124 00:29:27.087773 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.088054 kubelet[2775]: E0124 00:29:27.087784 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.093632 kubelet[2775]: E0124 00:29:27.093583 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:27.093632 kubelet[2775]: W0124 00:29:27.093596 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:27.093632 kubelet[2775]: E0124 00:29:27.093607 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:27.106457 containerd[1583]: time="2026-01-24T00:29:27.106411577Z" level=info msg="connecting to shim cbaa34f48e8ee4ed513f65f33e5e3631c3249ead7ae5fd57d596aa9bf76aaacf" address="unix:///run/containerd/s/6a32d035b50fde913c7b542e4e3f46bbba69fab302c4e9b7715a2a62f6370818" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:27.150853 systemd[1]: Started cri-containerd-cbaa34f48e8ee4ed513f65f33e5e3631c3249ead7ae5fd57d596aa9bf76aaacf.scope - libcontainer container cbaa34f48e8ee4ed513f65f33e5e3631c3249ead7ae5fd57d596aa9bf76aaacf. 
Jan 24 00:29:27.169000 audit: BPF prog-id=173 op=LOAD Jan 24 00:29:27.169000 audit: BPF prog-id=174 op=LOAD Jan 24 00:29:27.169000 audit[3342]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3330 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:27.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362616133346634386538656534656435313366363566333365356533 Jan 24 00:29:27.169000 audit: BPF prog-id=174 op=UNLOAD Jan 24 00:29:27.169000 audit[3342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:27.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362616133346634386538656534656435313366363566333365356533 Jan 24 00:29:27.170000 audit: BPF prog-id=175 op=LOAD Jan 24 00:29:27.170000 audit[3342]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3330 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:27.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362616133346634386538656534656435313366363566333365356533 Jan 24 00:29:27.170000 audit: BPF prog-id=176 op=LOAD Jan 24 00:29:27.170000 audit[3342]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3330 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:27.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362616133346634386538656534656435313366363566333365356533 Jan 24 00:29:27.170000 audit: BPF prog-id=176 op=UNLOAD Jan 24 00:29:27.170000 audit[3342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:27.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362616133346634386538656534656435313366363566333365356533 Jan 24 00:29:27.170000 audit: BPF prog-id=175 op=UNLOAD Jan 24 00:29:27.170000 audit[3342]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:27.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362616133346634386538656534656435313366363566333365356533 Jan 24 00:29:27.170000 audit: BPF prog-id=177 op=LOAD Jan 24 00:29:27.170000 audit[3342]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3330 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:27.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362616133346634386538656534656435313366363566333365356533 Jan 24 00:29:27.196438 containerd[1583]: time="2026-01-24T00:29:27.196375322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4p7jw,Uid:4b294bc6-db63-4a15-ab6d-1c4d82b485e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"cbaa34f48e8ee4ed513f65f33e5e3631c3249ead7ae5fd57d596aa9bf76aaacf\"" Jan 24 00:29:27.199025 kubelet[2775]: E0124 00:29:27.198416 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:28.222985 containerd[1583]: time="2026-01-24T00:29:28.222899478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:28.223885 containerd[1583]: time="2026-01-24T00:29:28.223857208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:28.224611 containerd[1583]: time="2026-01-24T00:29:28.224583547Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:28.227252 containerd[1583]: time="2026-01-24T00:29:28.226877086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:28.228737 containerd[1583]: time="2026-01-24T00:29:28.228623775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.19847016s" Jan 24 00:29:28.228737 containerd[1583]: time="2026-01-24T00:29:28.228694845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:29:28.230919 containerd[1583]: time="2026-01-24T00:29:28.230826044Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:29:28.247107 containerd[1583]: time="2026-01-24T00:29:28.247069876Z" level=info msg="CreateContainer within sandbox \"e5a7af01a03df18ae9d7f0b1f0ff5b0f211a37f7594da87955da719d15bb5f78\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:29:28.254711 containerd[1583]: time="2026-01-24T00:29:28.253244963Z" level=info msg="Container d76945ad1ab94a7311ebe5cd201a84cc5c3322a306be1abbf0ce8f8d401b6070: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:28.260065 containerd[1583]: time="2026-01-24T00:29:28.260011260Z" level=info msg="CreateContainer within sandbox \"e5a7af01a03df18ae9d7f0b1f0ff5b0f211a37f7594da87955da719d15bb5f78\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d76945ad1ab94a7311ebe5cd201a84cc5c3322a306be1abbf0ce8f8d401b6070\"" Jan 24 00:29:28.260829 containerd[1583]: time="2026-01-24T00:29:28.260803699Z" level=info msg="StartContainer for \"d76945ad1ab94a7311ebe5cd201a84cc5c3322a306be1abbf0ce8f8d401b6070\"" Jan 24 00:29:28.261833 containerd[1583]: time="2026-01-24T00:29:28.261791409Z" level=info msg="connecting to shim d76945ad1ab94a7311ebe5cd201a84cc5c3322a306be1abbf0ce8f8d401b6070" address="unix:///run/containerd/s/17d48024e4b66543e83d0c34238f986f00690fb2c6a7ffdd664a0b8dd5cf0889" protocol=ttrpc version=3 Jan 24 00:29:28.283815 systemd[1]: Started cri-containerd-d76945ad1ab94a7311ebe5cd201a84cc5c3322a306be1abbf0ce8f8d401b6070.scope - libcontainer container d76945ad1ab94a7311ebe5cd201a84cc5c3322a306be1abbf0ce8f8d401b6070. Jan 24 00:29:28.298000 audit: BPF prog-id=178 op=LOAD Jan 24 00:29:28.299000 audit: BPF prog-id=179 op=LOAD Jan 24 00:29:28.299000 audit[3378]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3252 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363934356164316162393461373331316562653563643230316138 Jan 24 00:29:28.299000 audit: BPF prog-id=179 op=UNLOAD Jan 24 00:29:28.299000 audit[3378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363934356164316162393461373331316562653563643230316138 Jan 24 00:29:28.299000 audit: BPF prog-id=180 op=LOAD Jan 24 00:29:28.299000 audit[3378]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3252 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.299000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363934356164316162393461373331316562653563643230316138 Jan 24 00:29:28.299000 audit: BPF prog-id=181 op=LOAD Jan 24 00:29:28.299000 audit[3378]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3252 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363934356164316162393461373331316562653563643230316138 Jan 24 00:29:28.300000 audit: BPF prog-id=181 op=UNLOAD Jan 24 00:29:28.300000 audit[3378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363934356164316162393461373331316562653563643230316138 Jan 24 00:29:28.300000 audit: BPF prog-id=180 op=UNLOAD Jan 24 00:29:28.300000 audit[3378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363934356164316162393461373331316562653563643230316138 Jan 24 00:29:28.300000 audit: BPF prog-id=182 op=LOAD Jan 24 00:29:28.300000 audit[3378]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3252 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437363934356164316162393461373331316562653563643230316138 Jan 24 00:29:28.324028 kubelet[2775]: E0124 00:29:28.323943 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:29:28.358101 containerd[1583]: time="2026-01-24T00:29:28.358021491Z" level=info msg="StartContainer for \"d76945ad1ab94a7311ebe5cd201a84cc5c3322a306be1abbf0ce8f8d401b6070\" 
returns successfully" Jan 24 00:29:28.415730 kubelet[2775]: E0124 00:29:28.415699 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:28.467295 kubelet[2775]: E0124 00:29:28.467246 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.467295 kubelet[2775]: W0124 00:29:28.467269 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.467295 kubelet[2775]: E0124 00:29:28.467288 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.468513 kubelet[2775]: E0124 00:29:28.468369 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.468513 kubelet[2775]: W0124 00:29:28.468381 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.468513 kubelet[2775]: E0124 00:29:28.468395 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.469722 kubelet[2775]: E0124 00:29:28.469700 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.469722 kubelet[2775]: W0124 00:29:28.469717 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.469829 kubelet[2775]: E0124 00:29:28.469727 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.470291 kubelet[2775]: E0124 00:29:28.470250 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.470291 kubelet[2775]: W0124 00:29:28.470270 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.470291 kubelet[2775]: E0124 00:29:28.470283 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:28.470543 kubelet[2775]: E0124 00:29:28.470518 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.470800 kubelet[2775]: W0124 00:29:28.470778 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.470927 kubelet[2775]: E0124 00:29:28.470815 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.471340 kubelet[2775]: E0124 00:29:28.471309 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.471340 kubelet[2775]: W0124 00:29:28.471324 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.471340 kubelet[2775]: E0124 00:29:28.471334 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.471744 kubelet[2775]: E0124 00:29:28.471627 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.471744 kubelet[2775]: W0124 00:29:28.471642 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.471744 kubelet[2775]: E0124 00:29:28.471682 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.472006 kubelet[2775]: E0124 00:29:28.471898 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.472006 kubelet[2775]: W0124 00:29:28.471908 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.472006 kubelet[2775]: E0124 00:29:28.471916 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.472753 kubelet[2775]: E0124 00:29:28.472729 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.472753 kubelet[2775]: W0124 00:29:28.472747 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.472753 kubelet[2775]: E0124 00:29:28.472756 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:28.473051 kubelet[2775]: E0124 00:29:28.472980 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.473051 kubelet[2775]: W0124 00:29:28.472997 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.473051 kubelet[2775]: E0124 00:29:28.473006 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.474018 kubelet[2775]: E0124 00:29:28.473994 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.474018 kubelet[2775]: W0124 00:29:28.474014 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.474018 kubelet[2775]: E0124 00:29:28.474024 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.474760 kubelet[2775]: E0124 00:29:28.474737 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.474760 kubelet[2775]: W0124 00:29:28.474753 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.474760 kubelet[2775]: E0124 00:29:28.474762 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.475893 kubelet[2775]: E0124 00:29:28.475853 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.475893 kubelet[2775]: W0124 00:29:28.475870 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.475893 kubelet[2775]: E0124 00:29:28.475880 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.476095 kubelet[2775]: E0124 00:29:28.476073 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.476095 kubelet[2775]: W0124 00:29:28.476088 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.476095 kubelet[2775]: E0124 00:29:28.476096 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:28.476314 kubelet[2775]: E0124 00:29:28.476290 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.476314 kubelet[2775]: W0124 00:29:28.476310 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.476371 kubelet[2775]: E0124 00:29:28.476319 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.476815 kubelet[2775]: E0124 00:29:28.476789 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.476815 kubelet[2775]: W0124 00:29:28.476810 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.476874 kubelet[2775]: E0124 00:29:28.476821 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.477254 kubelet[2775]: E0124 00:29:28.477223 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.477254 kubelet[2775]: W0124 00:29:28.477237 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.477464 kubelet[2775]: E0124 00:29:28.477429 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.478432 kubelet[2775]: E0124 00:29:28.477773 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.478432 kubelet[2775]: W0124 00:29:28.477789 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.478432 kubelet[2775]: E0124 00:29:28.477871 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.478432 kubelet[2775]: E0124 00:29:28.478220 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.478432 kubelet[2775]: W0124 00:29:28.478228 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.478432 kubelet[2775]: E0124 00:29:28.478275 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:28.478756 kubelet[2775]: E0124 00:29:28.478734 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.478756 kubelet[2775]: W0124 00:29:28.478749 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.478829 kubelet[2775]: E0124 00:29:28.478762 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.479032 kubelet[2775]: E0124 00:29:28.479006 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.479032 kubelet[2775]: W0124 00:29:28.479026 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.479145 kubelet[2775]: E0124 00:29:28.479112 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.479674 kubelet[2775]: E0124 00:29:28.479631 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.479721 kubelet[2775]: W0124 00:29:28.479674 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.479891 kubelet[2775]: E0124 00:29:28.479866 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.480229 kubelet[2775]: E0124 00:29:28.480206 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.480229 kubelet[2775]: W0124 00:29:28.480224 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.480776 kubelet[2775]: E0124 00:29:28.480698 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.480837 kubelet[2775]: E0124 00:29:28.480811 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.480865 kubelet[2775]: W0124 00:29:28.480846 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.480891 kubelet[2775]: E0124 00:29:28.480871 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:28.481392 kubelet[2775]: E0124 00:29:28.481364 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.481392 kubelet[2775]: W0124 00:29:28.481382 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.481460 kubelet[2775]: E0124 00:29:28.481431 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.481742 kubelet[2775]: E0124 00:29:28.481719 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.481742 kubelet[2775]: W0124 00:29:28.481736 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.482592 kubelet[2775]: E0124 00:29:28.482529 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.482897 kubelet[2775]: E0124 00:29:28.482874 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.482897 kubelet[2775]: W0124 00:29:28.482891 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.482963 kubelet[2775]: E0124 00:29:28.482905 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.483171 kubelet[2775]: E0124 00:29:28.483147 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.483171 kubelet[2775]: W0124 00:29:28.483163 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.483234 kubelet[2775]: E0124 00:29:28.483185 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.483422 kubelet[2775]: E0124 00:29:28.483401 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.483422 kubelet[2775]: W0124 00:29:28.483415 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.483484 kubelet[2775]: E0124 00:29:28.483428 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:28.483968 kubelet[2775]: E0124 00:29:28.483943 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.483968 kubelet[2775]: W0124 00:29:28.483958 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.484176 kubelet[2775]: E0124 00:29:28.484152 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.484722 kubelet[2775]: E0124 00:29:28.484699 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.484722 kubelet[2775]: W0124 00:29:28.484717 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.484793 kubelet[2775]: E0124 00:29:28.484729 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.484972 kubelet[2775]: E0124 00:29:28.484928 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.484972 kubelet[2775]: W0124 00:29:28.484939 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.484972 kubelet[2775]: E0124 00:29:28.484947 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:29:28.486193 kubelet[2775]: E0124 00:29:28.486169 2775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:29:28.486193 kubelet[2775]: W0124 00:29:28.486189 2775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:29:28.486377 kubelet[2775]: E0124 00:29:28.486204 2775 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:29:28.862610 containerd[1583]: time="2026-01-24T00:29:28.862560748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:28.863502 containerd[1583]: time="2026-01-24T00:29:28.863249028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:28.864118 containerd[1583]: time="2026-01-24T00:29:28.864085638Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:28.865774 containerd[1583]: time="2026-01-24T00:29:28.865734337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:28.866334 containerd[1583]: time="2026-01-24T00:29:28.866300726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 635.390072ms" Jan 24 00:29:28.866393 containerd[1583]: time="2026-01-24T00:29:28.866333266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:29:28.869460 containerd[1583]: time="2026-01-24T00:29:28.869420905Z" level=info msg="CreateContainer within sandbox \"cbaa34f48e8ee4ed513f65f33e5e3631c3249ead7ae5fd57d596aa9bf76aaacf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:29:28.882467 containerd[1583]: time="2026-01-24T00:29:28.882206369Z" level=info msg="Container 1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:28.891263 containerd[1583]: time="2026-01-24T00:29:28.891220604Z" level=info msg="CreateContainer within sandbox \"cbaa34f48e8ee4ed513f65f33e5e3631c3249ead7ae5fd57d596aa9bf76aaacf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16\"" Jan 24 00:29:28.892131 containerd[1583]: time="2026-01-24T00:29:28.891859394Z" level=info msg="StartContainer for \"1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16\"" Jan 24 00:29:28.894101 containerd[1583]: time="2026-01-24T00:29:28.893709663Z" level=info msg="connecting to shim 1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16" address="unix:///run/containerd/s/6a32d035b50fde913c7b542e4e3f46bbba69fab302c4e9b7715a2a62f6370818" protocol=ttrpc version=3 Jan 24 00:29:28.927324 systemd[1]: Started cri-containerd-1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16.scope - libcontainer container 1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16. 
Jan 24 00:29:28.993000 audit: BPF prog-id=183 op=LOAD Jan 24 00:29:28.993000 audit[3453]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3330 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130323661336137663530373564303463626539646337313465343966 Jan 24 00:29:28.993000 audit: BPF prog-id=184 op=LOAD Jan 24 00:29:28.993000 audit[3453]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3330 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130323661336137663530373564303463626539646337313465343966 Jan 24 00:29:28.993000 audit: BPF prog-id=184 op=UNLOAD Jan 24 00:29:28.993000 audit[3453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130323661336137663530373564303463626539646337313465343966 Jan 24 00:29:28.993000 audit: BPF prog-id=183 op=UNLOAD Jan 24 00:29:28.993000 audit[3453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130323661336137663530373564303463626539646337313465343966 Jan 24 00:29:28.994000 audit: BPF prog-id=185 op=LOAD Jan 24 00:29:28.994000 audit[3453]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3330 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:28.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130323661336137663530373564303463626539646337313465343966 Jan 24 00:29:29.015840 containerd[1583]: time="2026-01-24T00:29:29.015792532Z" level=info msg="StartContainer for 
\"1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16\" returns successfully" Jan 24 00:29:29.036900 systemd[1]: cri-containerd-1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16.scope: Deactivated successfully. Jan 24 00:29:29.039000 audit: BPF prog-id=185 op=UNLOAD Jan 24 00:29:29.041429 containerd[1583]: time="2026-01-24T00:29:29.041394179Z" level=info msg="received container exit event container_id:\"1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16\" id:\"1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16\" pid:3466 exited_at:{seconds:1769214569 nanos:40431099}" Jan 24 00:29:29.067960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1026a3a7f5075d04cbe9dc714e49fe49980bbfcab8958a10f4ee04b9f46e7c16-rootfs.mount: Deactivated successfully. Jan 24 00:29:29.418296 kubelet[2775]: I0124 00:29:29.418199 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:29:29.418799 kubelet[2775]: E0124 00:29:29.418461 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:29.418799 kubelet[2775]: E0124 00:29:29.418694 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:29.419988 containerd[1583]: time="2026-01-24T00:29:29.419958120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:29:29.434595 kubelet[2775]: I0124 00:29:29.433733 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d8c478c95-r4hcl" podStartSLOduration=2.233032854 podStartE2EDuration="3.433721173s" podCreationTimestamp="2026-01-24 00:29:26 +0000 UTC" firstStartedPulling="2026-01-24 00:29:27.029853955 +0000 UTC m=+20.793166142" lastFinishedPulling="2026-01-24 00:29:28.230542274 +0000 UTC m=+21.993854461" observedRunningTime="2026-01-24 00:29:28.428903805 +0000 UTC m=+22.192216002" watchObservedRunningTime="2026-01-24 00:29:29.433721173 +0000 UTC m=+23.197033360" Jan 24 00:29:30.327597 kubelet[2775]: E0124 00:29:30.327565 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:29:31.121330 containerd[1583]: time="2026-01-24T00:29:31.120780519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:31.122333 containerd[1583]: time="2026-01-24T00:29:31.121712428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:31.122917 containerd[1583]: time="2026-01-24T00:29:31.122607188Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:31.127484 containerd[1583]: time="2026-01-24T00:29:31.126115306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 
00:29:31.127484 containerd[1583]: time="2026-01-24T00:29:31.127313156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.707321216s" Jan 24 00:29:31.127484 containerd[1583]: time="2026-01-24T00:29:31.127333956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:29:31.131026 containerd[1583]: time="2026-01-24T00:29:31.130995034Z" level=info msg="CreateContainer within sandbox \"cbaa34f48e8ee4ed513f65f33e5e3631c3249ead7ae5fd57d596aa9bf76aaacf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:29:31.141553 containerd[1583]: time="2026-01-24T00:29:31.139811229Z" level=info msg="Container 579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:31.149559 containerd[1583]: time="2026-01-24T00:29:31.149525395Z" level=info msg="CreateContainer within sandbox \"cbaa34f48e8ee4ed513f65f33e5e3631c3249ead7ae5fd57d596aa9bf76aaacf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010\"" Jan 24 00:29:31.150098 containerd[1583]: time="2026-01-24T00:29:31.149964814Z" level=info msg="StartContainer for \"579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010\"" Jan 24 00:29:31.152240 containerd[1583]: time="2026-01-24T00:29:31.152216913Z" level=info msg="connecting to shim 579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010" address="unix:///run/containerd/s/6a32d035b50fde913c7b542e4e3f46bbba69fab302c4e9b7715a2a62f6370818" protocol=ttrpc version=3 Jan 24 00:29:31.176815 systemd[1]: Started cri-containerd-579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010.scope - libcontainer container 579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010. 
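
The audit records interleaved above (comm="runc", syscall=321, paired BPF prog-id LOAD/UNLOAD events) are runc issuing bpf(2) calls, which is syscall 321 on x86_64, while it sets up each container; on a cgroup v2 host the per-container device policy is typically enforced with a small BPF program that is loaded and later unloaded again. The PROCTITLE field looks opaque because the kernel records the command line as a hex-encoded, NUL-separated argv; a short sketch that decodes a truncated value copied from one of the records above:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Truncated PROCTITLE value copied from one of the audit records above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// The kernel stores the command line as NUL-separated argv strings.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Output: runc --root /run/containerd/runc/k8s.io --log
}
```
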
Jan 24 00:29:31.237000 audit: BPF prog-id=186 op=LOAD Jan 24 00:29:31.240668 kernel: kauditd_printk_skb: 96 callbacks suppressed Jan 24 00:29:31.240735 kernel: audit: type=1334 audit(1769214571.237:590): prog-id=186 op=LOAD Jan 24 00:29:31.237000 audit[3511]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3330 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:31.250734 kernel: audit: type=1300 audit(1769214571.237:590): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3330 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:31.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396563316265353238626163383638336165616136613434386232 Jan 24 00:29:31.263749 kernel: audit: type=1327 audit(1769214571.237:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396563316265353238626163383638336165616136613434386232 Jan 24 00:29:31.263834 kernel: audit: type=1334 audit(1769214571.237:591): prog-id=187 op=LOAD Jan 24 00:29:31.237000 audit: BPF prog-id=187 op=LOAD Jan 24 00:29:31.237000 audit[3511]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3330 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:31.273764 kernel: audit: type=1300 audit(1769214571.237:591): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3330 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:31.287673 kernel: audit: type=1327 audit(1769214571.237:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396563316265353238626163383638336165616136613434386232 Jan 24 00:29:31.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396563316265353238626163383638336165616136613434386232 Jan 24 00:29:31.290709 kernel: audit: type=1334 audit(1769214571.237:592): prog-id=187 op=UNLOAD Jan 24 00:29:31.237000 audit: BPF prog-id=187 op=UNLOAD Jan 24 00:29:31.237000 audit[3511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:31.306032 kernel: audit: type=1300 
audit(1769214571.237:592): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:31.306086 kernel: audit: type=1327 audit(1769214571.237:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396563316265353238626163383638336165616136613434386232 Jan 24 00:29:31.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396563316265353238626163383638336165616136613434386232 Jan 24 00:29:31.237000 audit: BPF prog-id=186 op=UNLOAD Jan 24 00:29:31.310903 kernel: audit: type=1334 audit(1769214571.237:593): prog-id=186 op=UNLOAD Jan 24 00:29:31.315117 containerd[1583]: time="2026-01-24T00:29:31.315046052Z" level=info msg="StartContainer for \"579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010\" returns successfully" Jan 24 00:29:31.237000 audit[3511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:31.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396563316265353238626163383638336165616136613434386232 Jan 24 00:29:31.237000 audit: BPF prog-id=188 op=LOAD Jan 24 00:29:31.237000 audit[3511]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3330 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:31.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537396563316265353238626163383638336165616136613434386232 Jan 24 00:29:31.428468 kubelet[2775]: E0124 00:29:31.428311 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:31.882360 containerd[1583]: time="2026-01-24T00:29:31.882134738Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Jan 24 00:29:31.885754 systemd[1]: cri-containerd-579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010.scope: Deactivated successfully. 
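
The "failed to reload cni configuration" error just above is a race during install-cni's run rather than a persistent misconfiguration: containerd watches /etc/cni/net.d, the write event for calico-kubeconfig arrived while the directory was still being populated, and 10-calico.conflist was presumably empty or half-written at the moment it was read, which again surfaces as "unexpected end of JSON input". Once install-cni finishes, the config loads and the node goes Ready below. A sketch under those assumptions, with a trimmed, illustrative conflist (the real file install-cni writes is larger and generated from the pod's environment) and a guard that skips a file that is not yet valid JSON:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed, illustrative version of what install-cni writes to
// /etc/cni/net.d/10-calico.conflist; the real file carries more plugin
// options and is templated from the calico-node pod's environment.
const conflist = `{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {"type": "calico", "datastore_type": "kubernetes", "ipam": {"type": "calico-ipam"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

// loadConflist skips files that are not complete JSON yet, which is the state
// 10-calico.conflist is in while install-cni is still writing it.
func loadConflist(data []byte) error {
	if !json.Valid(data) {
		return fmt.Errorf("conflist not valid JSON yet, retry on the next fs event")
	}
	var cfg map[string]any
	return json.Unmarshal(data, &cfg)
}

func main() {
	fmt.Println(loadConflist([]byte("")))       // still empty mid-write: not valid yet
	fmt.Println(loadConflist([]byte(conflist))) // <nil>
}
```
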
Jan 24 00:29:31.886389 systemd[1]: cri-containerd-579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010.scope: Consumed 531ms CPU time, 196.6M memory peak, 171.3M written to disk. Jan 24 00:29:31.887000 audit: BPF prog-id=188 op=UNLOAD Jan 24 00:29:31.889353 containerd[1583]: time="2026-01-24T00:29:31.889301855Z" level=info msg="received container exit event container_id:\"579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010\" id:\"579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010\" pid:3524 exited_at:{seconds:1769214571 nanos:888232365}" Jan 24 00:29:31.910876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-579ec1be528bac8683aeaa6a448b2195995e200f421f3e92d3dd216834fba010-rootfs.mount: Deactivated successfully. Jan 24 00:29:31.977287 kubelet[2775]: I0124 00:29:31.977255 2775 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:29:32.041886 systemd[1]: Created slice kubepods-burstable-podf51ff3e1_9f79_414a_8962_e8fa83c40e67.slice - libcontainer container kubepods-burstable-podf51ff3e1_9f79_414a_8962_e8fa83c40e67.slice. Jan 24 00:29:32.060170 systemd[1]: Created slice kubepods-besteffort-pod1fd846d3_1f7b_431f_b783_0e9765cbef2a.slice - libcontainer container kubepods-besteffort-pod1fd846d3_1f7b_431f_b783_0e9765cbef2a.slice. Jan 24 00:29:32.073720 systemd[1]: Created slice kubepods-besteffort-pod2078a205_dd8a_4be4_86c7_d47960d16fb3.slice - libcontainer container kubepods-besteffort-pod2078a205_dd8a_4be4_86c7_d47960d16fb3.slice. Jan 24 00:29:32.082816 systemd[1]: Created slice kubepods-besteffort-podcc3d3be5_6ea1_456f_b2a1_4ebadd452d7b.slice - libcontainer container kubepods-besteffort-podcc3d3be5_6ea1_456f_b2a1_4ebadd452d7b.slice. Jan 24 00:29:32.088405 systemd[1]: Created slice kubepods-besteffort-pod54bcd514_60de_4dea_9d5b_89d698a7baf9.slice - libcontainer container kubepods-besteffort-pod54bcd514_60de_4dea_9d5b_89d698a7baf9.slice. Jan 24 00:29:32.096109 systemd[1]: Created slice kubepods-burstable-podd85e97f3_257f_4333_95c1_a12671983d81.slice - libcontainer container kubepods-burstable-podd85e97f3_257f_4333_95c1_a12671983d81.slice. Jan 24 00:29:32.103064 systemd[1]: Created slice kubepods-besteffort-podfe31be8c_c3f6_4683_9bda_2dbdf7783a1a.slice - libcontainer container kubepods-besteffort-podfe31be8c_c3f6_4683_9bda_2dbdf7783a1a.slice. 
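
The burst of RunPodSandbox failures below is the expected ordering at this point of boot: the slices just created are for pods that need pod networking, but the Calico CNI plugin refuses ADD calls until /var/lib/calico/nodename exists, and that file only appears once the calico-node container (pulled and started at the end of this section) is running with /var/lib/calico/ mounted, exactly as the error text says; the kubelet simply retries each sandbox. A minimal illustrative check mirroring what the errors describe (not Calico's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by calico-node once it is up; the CNI errors below
// tell callers to check for exactly this file.
const nodenameFile = "/var/lib/calico/nodename"

// calicoReady is an illustrative readiness probe mirroring the failing stat in
// the sandbox errors; it is not Calico's actual implementation.
func calicoReady() (bool, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			return false, nil // calico-node not running yet; the kubelet keeps retrying
		}
		return false, err
	}
	return true, nil
}

func main() {
	ready, err := calicoReady()
	fmt.Printf("calico networking ready: %v (err=%v)\n", ready, err)
}
```
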
Jan 24 00:29:32.112463 kubelet[2775]: I0124 00:29:32.112402 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54bcd514-60de-4dea-9d5b-89d698a7baf9-whisker-ca-bundle\") pod \"whisker-67456c7956-ktm76\" (UID: \"54bcd514-60de-4dea-9d5b-89d698a7baf9\") " pod="calico-system/whisker-67456c7956-ktm76" Jan 24 00:29:32.112564 kubelet[2775]: I0124 00:29:32.112471 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-248dp\" (UniqueName: \"kubernetes.io/projected/54bcd514-60de-4dea-9d5b-89d698a7baf9-kube-api-access-248dp\") pod \"whisker-67456c7956-ktm76\" (UID: \"54bcd514-60de-4dea-9d5b-89d698a7baf9\") " pod="calico-system/whisker-67456c7956-ktm76" Jan 24 00:29:32.112564 kubelet[2775]: I0124 00:29:32.112554 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fe31be8c-c3f6-4683-9bda-2dbdf7783a1a-goldmane-key-pair\") pod \"goldmane-666569f655-b2qbk\" (UID: \"fe31be8c-c3f6-4683-9bda-2dbdf7783a1a\") " pod="calico-system/goldmane-666569f655-b2qbk" Jan 24 00:29:32.112636 kubelet[2775]: I0124 00:29:32.112575 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54bcd514-60de-4dea-9d5b-89d698a7baf9-whisker-backend-key-pair\") pod \"whisker-67456c7956-ktm76\" (UID: \"54bcd514-60de-4dea-9d5b-89d698a7baf9\") " pod="calico-system/whisker-67456c7956-ktm76" Jan 24 00:29:32.113217 kubelet[2775]: I0124 00:29:32.112636 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f51ff3e1-9f79-414a-8962-e8fa83c40e67-config-volume\") pod \"coredns-668d6bf9bc-ds6l9\" (UID: \"f51ff3e1-9f79-414a-8962-e8fa83c40e67\") " pod="kube-system/coredns-668d6bf9bc-ds6l9" Jan 24 00:29:32.113217 kubelet[2775]: I0124 00:29:32.112784 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brdqh\" (UniqueName: \"kubernetes.io/projected/f51ff3e1-9f79-414a-8962-e8fa83c40e67-kube-api-access-brdqh\") pod \"coredns-668d6bf9bc-ds6l9\" (UID: \"f51ff3e1-9f79-414a-8962-e8fa83c40e67\") " pod="kube-system/coredns-668d6bf9bc-ds6l9" Jan 24 00:29:32.113217 kubelet[2775]: I0124 00:29:32.112802 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe31be8c-c3f6-4683-9bda-2dbdf7783a1a-config\") pod \"goldmane-666569f655-b2qbk\" (UID: \"fe31be8c-c3f6-4683-9bda-2dbdf7783a1a\") " pod="calico-system/goldmane-666569f655-b2qbk" Jan 24 00:29:32.113217 kubelet[2775]: I0124 00:29:32.112819 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95p6g\" (UniqueName: \"kubernetes.io/projected/2078a205-dd8a-4be4-86c7-d47960d16fb3-kube-api-access-95p6g\") pod \"calico-apiserver-569b7b4d59-8g26m\" (UID: \"2078a205-dd8a-4be4-86c7-d47960d16fb3\") " pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" Jan 24 00:29:32.113217 kubelet[2775]: I0124 00:29:32.112871 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b-calico-apiserver-certs\") pod \"calico-apiserver-569b7b4d59-5k5g4\" (UID: \"cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b\") " pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" Jan 24 00:29:32.113919 kubelet[2775]: I0124 00:29:32.112888 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s5cr\" (UniqueName: \"kubernetes.io/projected/cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b-kube-api-access-6s5cr\") pod \"calico-apiserver-569b7b4d59-5k5g4\" (UID: \"cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b\") " pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" Jan 24 00:29:32.113919 kubelet[2775]: I0124 00:29:32.112903 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe31be8c-c3f6-4683-9bda-2dbdf7783a1a-goldmane-ca-bundle\") pod \"goldmane-666569f655-b2qbk\" (UID: \"fe31be8c-c3f6-4683-9bda-2dbdf7783a1a\") " pod="calico-system/goldmane-666569f655-b2qbk" Jan 24 00:29:32.113919 kubelet[2775]: I0124 00:29:32.112974 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5rh6\" (UniqueName: \"kubernetes.io/projected/fe31be8c-c3f6-4683-9bda-2dbdf7783a1a-kube-api-access-q5rh6\") pod \"goldmane-666569f655-b2qbk\" (UID: \"fe31be8c-c3f6-4683-9bda-2dbdf7783a1a\") " pod="calico-system/goldmane-666569f655-b2qbk" Jan 24 00:29:32.113919 kubelet[2775]: I0124 00:29:32.112990 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d85e97f3-257f-4333-95c1-a12671983d81-config-volume\") pod \"coredns-668d6bf9bc-jw5xc\" (UID: \"d85e97f3-257f-4333-95c1-a12671983d81\") " pod="kube-system/coredns-668d6bf9bc-jw5xc" Jan 24 00:29:32.113919 kubelet[2775]: I0124 00:29:32.113043 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jl6r\" (UniqueName: \"kubernetes.io/projected/1fd846d3-1f7b-431f-b783-0e9765cbef2a-kube-api-access-8jl6r\") pod \"calico-kube-controllers-547797c6d5-k28bz\" (UID: \"1fd846d3-1f7b-431f-b783-0e9765cbef2a\") " pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" Jan 24 00:29:32.114056 kubelet[2775]: I0124 00:29:32.113062 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrqcq\" (UniqueName: \"kubernetes.io/projected/d85e97f3-257f-4333-95c1-a12671983d81-kube-api-access-rrqcq\") pod \"coredns-668d6bf9bc-jw5xc\" (UID: \"d85e97f3-257f-4333-95c1-a12671983d81\") " pod="kube-system/coredns-668d6bf9bc-jw5xc" Jan 24 00:29:32.114056 kubelet[2775]: I0124 00:29:32.113118 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2078a205-dd8a-4be4-86c7-d47960d16fb3-calico-apiserver-certs\") pod \"calico-apiserver-569b7b4d59-8g26m\" (UID: \"2078a205-dd8a-4be4-86c7-d47960d16fb3\") " pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" Jan 24 00:29:32.114056 kubelet[2775]: I0124 00:29:32.113134 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fd846d3-1f7b-431f-b783-0e9765cbef2a-tigera-ca-bundle\") pod \"calico-kube-controllers-547797c6d5-k28bz\" (UID: 
\"1fd846d3-1f7b-431f-b783-0e9765cbef2a\") " pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" Jan 24 00:29:32.331136 systemd[1]: Created slice kubepods-besteffort-pod9b58f3e4_7a56_472e_b647_710386b89b86.slice - libcontainer container kubepods-besteffort-pod9b58f3e4_7a56_472e_b647_710386b89b86.slice. Jan 24 00:29:32.335635 containerd[1583]: time="2026-01-24T00:29:32.335268762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dzs24,Uid:9b58f3e4-7a56-472e-b647-710386b89b86,Namespace:calico-system,Attempt:0,}" Jan 24 00:29:32.353918 kubelet[2775]: E0124 00:29:32.353888 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:32.354584 containerd[1583]: time="2026-01-24T00:29:32.354528252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ds6l9,Uid:f51ff3e1-9f79-414a-8962-e8fa83c40e67,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:32.367868 containerd[1583]: time="2026-01-24T00:29:32.367635275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547797c6d5-k28bz,Uid:1fd846d3-1f7b-431f-b783-0e9765cbef2a,Namespace:calico-system,Attempt:0,}" Jan 24 00:29:32.381779 containerd[1583]: time="2026-01-24T00:29:32.381739468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569b7b4d59-8g26m,Uid:2078a205-dd8a-4be4-86c7-d47960d16fb3,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:29:32.387516 containerd[1583]: time="2026-01-24T00:29:32.387238236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569b7b4d59-5k5g4,Uid:cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:29:32.392238 containerd[1583]: time="2026-01-24T00:29:32.392209383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67456c7956-ktm76,Uid:54bcd514-60de-4dea-9d5b-89d698a7baf9,Namespace:calico-system,Attempt:0,}" Jan 24 00:29:32.399615 kubelet[2775]: E0124 00:29:32.399590 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:32.401898 containerd[1583]: time="2026-01-24T00:29:32.401640318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw5xc,Uid:d85e97f3-257f-4333-95c1-a12671983d81,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:32.406477 containerd[1583]: time="2026-01-24T00:29:32.406454426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b2qbk,Uid:fe31be8c-c3f6-4683-9bda-2dbdf7783a1a,Namespace:calico-system,Attempt:0,}" Jan 24 00:29:32.437694 kubelet[2775]: E0124 00:29:32.437620 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:32.440463 containerd[1583]: time="2026-01-24T00:29:32.440434159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:29:32.525173 containerd[1583]: time="2026-01-24T00:29:32.525133217Z" level=error msg="Failed to destroy network for sandbox \"e9d1d486b91c5381a73498f444da2bf3b697e129eff9d3f9b7166445eb650fdd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 24 00:29:32.532024 containerd[1583]: time="2026-01-24T00:29:32.531696853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dzs24,Uid:9b58f3e4-7a56-472e-b647-710386b89b86,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9d1d486b91c5381a73498f444da2bf3b697e129eff9d3f9b7166445eb650fdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.532158 kubelet[2775]: E0124 00:29:32.532005 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9d1d486b91c5381a73498f444da2bf3b697e129eff9d3f9b7166445eb650fdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.532158 kubelet[2775]: E0124 00:29:32.532086 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9d1d486b91c5381a73498f444da2bf3b697e129eff9d3f9b7166445eb650fdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dzs24" Jan 24 00:29:32.532158 kubelet[2775]: E0124 00:29:32.532146 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9d1d486b91c5381a73498f444da2bf3b697e129eff9d3f9b7166445eb650fdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dzs24" Jan 24 00:29:32.532293 kubelet[2775]: E0124 00:29:32.532189 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dzs24_calico-system(9b58f3e4-7a56-472e-b647-710386b89b86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dzs24_calico-system(9b58f3e4-7a56-472e-b647-710386b89b86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9d1d486b91c5381a73498f444da2bf3b697e129eff9d3f9b7166445eb650fdd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:29:32.559707 containerd[1583]: time="2026-01-24T00:29:32.559565059Z" level=error msg="Failed to destroy network for sandbox \"1f3451fc6da8122838c3edec81764fc00830aa4a8ddb9f7a2be170ef6af1dc6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.562126 containerd[1583]: time="2026-01-24T00:29:32.562067048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ds6l9,Uid:f51ff3e1-9f79-414a-8962-e8fa83c40e67,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1f3451fc6da8122838c3edec81764fc00830aa4a8ddb9f7a2be170ef6af1dc6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.562481 kubelet[2775]: E0124 00:29:32.562443 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f3451fc6da8122838c3edec81764fc00830aa4a8ddb9f7a2be170ef6af1dc6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.562535 kubelet[2775]: E0124 00:29:32.562497 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f3451fc6da8122838c3edec81764fc00830aa4a8ddb9f7a2be170ef6af1dc6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ds6l9" Jan 24 00:29:32.562564 kubelet[2775]: E0124 00:29:32.562528 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f3451fc6da8122838c3edec81764fc00830aa4a8ddb9f7a2be170ef6af1dc6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ds6l9" Jan 24 00:29:32.562630 kubelet[2775]: E0124 00:29:32.562587 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ds6l9_kube-system(f51ff3e1-9f79-414a-8962-e8fa83c40e67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ds6l9_kube-system(f51ff3e1-9f79-414a-8962-e8fa83c40e67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f3451fc6da8122838c3edec81764fc00830aa4a8ddb9f7a2be170ef6af1dc6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ds6l9" podUID="f51ff3e1-9f79-414a-8962-e8fa83c40e67" Jan 24 00:29:32.571876 containerd[1583]: time="2026-01-24T00:29:32.571844433Z" level=error msg="Failed to destroy network for sandbox \"82356a4d218a46456ffa8313d21478e286cc43b3ed6374a55683424097eb8428\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.574857 containerd[1583]: time="2026-01-24T00:29:32.574820332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547797c6d5-k28bz,Uid:1fd846d3-1f7b-431f-b783-0e9765cbef2a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82356a4d218a46456ffa8313d21478e286cc43b3ed6374a55683424097eb8428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.575021 kubelet[2775]: E0124 00:29:32.574985 2775 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82356a4d218a46456ffa8313d21478e286cc43b3ed6374a55683424097eb8428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.575095 kubelet[2775]: E0124 00:29:32.575031 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82356a4d218a46456ffa8313d21478e286cc43b3ed6374a55683424097eb8428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" Jan 24 00:29:32.575095 kubelet[2775]: E0124 00:29:32.575048 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82356a4d218a46456ffa8313d21478e286cc43b3ed6374a55683424097eb8428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" Jan 24 00:29:32.575161 kubelet[2775]: E0124 00:29:32.575087 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-547797c6d5-k28bz_calico-system(1fd846d3-1f7b-431f-b783-0e9765cbef2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-547797c6d5-k28bz_calico-system(1fd846d3-1f7b-431f-b783-0e9765cbef2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82356a4d218a46456ffa8313d21478e286cc43b3ed6374a55683424097eb8428\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:29:32.607431 containerd[1583]: time="2026-01-24T00:29:32.606315106Z" level=error msg="Failed to destroy network for sandbox \"c7d3834c634328634ade4d8299570d58cd8699c927f46c9ec565432968d4c5eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.611165 containerd[1583]: time="2026-01-24T00:29:32.611035734Z" level=error msg="Failed to destroy network for sandbox \"c40ce5bbad3b9f277a9a50db18eaaab5b6a8dcddd75c7797ace2e1f2c1b48913\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.613532 containerd[1583]: time="2026-01-24T00:29:32.613503562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569b7b4d59-5k5g4,Uid:cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40ce5bbad3b9f277a9a50db18eaaab5b6a8dcddd75c7797ace2e1f2c1b48913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 24 00:29:32.614466 kubelet[2775]: E0124 00:29:32.614407 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40ce5bbad3b9f277a9a50db18eaaab5b6a8dcddd75c7797ace2e1f2c1b48913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.615222 containerd[1583]: time="2026-01-24T00:29:32.615043162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569b7b4d59-8g26m,Uid:2078a205-dd8a-4be4-86c7-d47960d16fb3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d3834c634328634ade4d8299570d58cd8699c927f46c9ec565432968d4c5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.615397 kubelet[2775]: E0124 00:29:32.615351 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40ce5bbad3b9f277a9a50db18eaaab5b6a8dcddd75c7797ace2e1f2c1b48913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" Jan 24 00:29:32.615744 kubelet[2775]: E0124 00:29:32.615166 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d3834c634328634ade4d8299570d58cd8699c927f46c9ec565432968d4c5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.615744 kubelet[2775]: E0124 00:29:32.615468 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d3834c634328634ade4d8299570d58cd8699c927f46c9ec565432968d4c5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" Jan 24 00:29:32.615744 kubelet[2775]: E0124 00:29:32.615483 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d3834c634328634ade4d8299570d58cd8699c927f46c9ec565432968d4c5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" Jan 24 00:29:32.616120 kubelet[2775]: E0124 00:29:32.615520 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-569b7b4d59-8g26m_calico-apiserver(2078a205-dd8a-4be4-86c7-d47960d16fb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-569b7b4d59-8g26m_calico-apiserver(2078a205-dd8a-4be4-86c7-d47960d16fb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c7d3834c634328634ade4d8299570d58cd8699c927f46c9ec565432968d4c5eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:29:32.616277 kubelet[2775]: E0124 00:29:32.615380 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40ce5bbad3b9f277a9a50db18eaaab5b6a8dcddd75c7797ace2e1f2c1b48913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" Jan 24 00:29:32.616277 kubelet[2775]: E0124 00:29:32.616196 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-569b7b4d59-5k5g4_calico-apiserver(cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-569b7b4d59-5k5g4_calico-apiserver(cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c40ce5bbad3b9f277a9a50db18eaaab5b6a8dcddd75c7797ace2e1f2c1b48913\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:29:32.619182 containerd[1583]: time="2026-01-24T00:29:32.619157980Z" level=error msg="Failed to destroy network for sandbox \"5efd5995b55a7d329b3206a0f2b8a3ea858b8bc82e238a1e8e9e979b00a3a451\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.621216 containerd[1583]: time="2026-01-24T00:29:32.621190849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67456c7956-ktm76,Uid:54bcd514-60de-4dea-9d5b-89d698a7baf9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5efd5995b55a7d329b3206a0f2b8a3ea858b8bc82e238a1e8e9e979b00a3a451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.621515 kubelet[2775]: E0124 00:29:32.621461 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5efd5995b55a7d329b3206a0f2b8a3ea858b8bc82e238a1e8e9e979b00a3a451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.621707 kubelet[2775]: E0124 00:29:32.621612 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5efd5995b55a7d329b3206a0f2b8a3ea858b8bc82e238a1e8e9e979b00a3a451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-67456c7956-ktm76" Jan 24 00:29:32.621833 kubelet[2775]: E0124 00:29:32.621784 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5efd5995b55a7d329b3206a0f2b8a3ea858b8bc82e238a1e8e9e979b00a3a451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67456c7956-ktm76" Jan 24 00:29:32.622142 kubelet[2775]: E0124 00:29:32.622087 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67456c7956-ktm76_calico-system(54bcd514-60de-4dea-9d5b-89d698a7baf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67456c7956-ktm76_calico-system(54bcd514-60de-4dea-9d5b-89d698a7baf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5efd5995b55a7d329b3206a0f2b8a3ea858b8bc82e238a1e8e9e979b00a3a451\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67456c7956-ktm76" podUID="54bcd514-60de-4dea-9d5b-89d698a7baf9" Jan 24 00:29:32.634097 containerd[1583]: time="2026-01-24T00:29:32.634051762Z" level=error msg="Failed to destroy network for sandbox \"690f9023cee45e2fc7158d85f460b9af5775df28f39e65163b5208e33b5f99dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.636238 containerd[1583]: time="2026-01-24T00:29:32.636158771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw5xc,Uid:d85e97f3-257f-4333-95c1-a12671983d81,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"690f9023cee45e2fc7158d85f460b9af5775df28f39e65163b5208e33b5f99dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.636791 kubelet[2775]: E0124 00:29:32.636750 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690f9023cee45e2fc7158d85f460b9af5775df28f39e65163b5208e33b5f99dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.636871 kubelet[2775]: E0124 00:29:32.636806 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690f9023cee45e2fc7158d85f460b9af5775df28f39e65163b5208e33b5f99dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jw5xc" Jan 24 00:29:32.636871 kubelet[2775]: E0124 00:29:32.636827 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690f9023cee45e2fc7158d85f460b9af5775df28f39e65163b5208e33b5f99dd\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jw5xc" Jan 24 00:29:32.636925 kubelet[2775]: E0124 00:29:32.636871 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jw5xc_kube-system(d85e97f3-257f-4333-95c1-a12671983d81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jw5xc_kube-system(d85e97f3-257f-4333-95c1-a12671983d81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"690f9023cee45e2fc7158d85f460b9af5775df28f39e65163b5208e33b5f99dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jw5xc" podUID="d85e97f3-257f-4333-95c1-a12671983d81" Jan 24 00:29:32.644023 containerd[1583]: time="2026-01-24T00:29:32.643949017Z" level=error msg="Failed to destroy network for sandbox \"31741b845d2fce2b00b305ca03702a6677675000c7211b0422caef74ca6202fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.646116 containerd[1583]: time="2026-01-24T00:29:32.646071526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b2qbk,Uid:fe31be8c-c3f6-4683-9bda-2dbdf7783a1a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31741b845d2fce2b00b305ca03702a6677675000c7211b0422caef74ca6202fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.646247 kubelet[2775]: E0124 00:29:32.646217 2775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31741b845d2fce2b00b305ca03702a6677675000c7211b0422caef74ca6202fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:29:32.646351 kubelet[2775]: E0124 00:29:32.646256 2775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31741b845d2fce2b00b305ca03702a6677675000c7211b0422caef74ca6202fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-b2qbk" Jan 24 00:29:32.646351 kubelet[2775]: E0124 00:29:32.646274 2775 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31741b845d2fce2b00b305ca03702a6677675000c7211b0422caef74ca6202fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-b2qbk" Jan 24 00:29:32.646351 kubelet[2775]: E0124 00:29:32.646303 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-666569f655-b2qbk_calico-system(fe31be8c-c3f6-4683-9bda-2dbdf7783a1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-b2qbk_calico-system(fe31be8c-c3f6-4683-9bda-2dbdf7783a1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31741b845d2fce2b00b305ca03702a6677675000c7211b0422caef74ca6202fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:29:35.852964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522192449.mount: Deactivated successfully. Jan 24 00:29:35.874472 containerd[1583]: time="2026-01-24T00:29:35.874370352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:35.875325 containerd[1583]: time="2026-01-24T00:29:35.875289251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 24 00:29:35.875951 containerd[1583]: time="2026-01-24T00:29:35.875903601Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:35.877374 containerd[1583]: time="2026-01-24T00:29:35.877319560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:35.878136 containerd[1583]: time="2026-01-24T00:29:35.877845600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.437375361s" Jan 24 00:29:35.878136 containerd[1583]: time="2026-01-24T00:29:35.877888350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:29:35.898144 containerd[1583]: time="2026-01-24T00:29:35.897905210Z" level=info msg="CreateContainer within sandbox \"cbaa34f48e8ee4ed513f65f33e5e3631c3249ead7ae5fd57d596aa9bf76aaacf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:29:35.916897 containerd[1583]: time="2026-01-24T00:29:35.916845110Z" level=info msg="Container fb274a7f2fd2e87d00f48749cbfd24087f12f4a0d8d53e8c755fbdaa599c08f2: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:35.924266 containerd[1583]: time="2026-01-24T00:29:35.924228547Z" level=info msg="CreateContainer within sandbox \"cbaa34f48e8ee4ed513f65f33e5e3631c3249ead7ae5fd57d596aa9bf76aaacf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fb274a7f2fd2e87d00f48749cbfd24087f12f4a0d8d53e8c755fbdaa599c08f2\"" Jan 24 00:29:35.924757 containerd[1583]: time="2026-01-24T00:29:35.924732686Z" level=info msg="StartContainer for \"fb274a7f2fd2e87d00f48749cbfd24087f12f4a0d8d53e8c755fbdaa599c08f2\"" Jan 24 00:29:35.927125 containerd[1583]: time="2026-01-24T00:29:35.927055945Z" level=info msg="connecting to shim 
fb274a7f2fd2e87d00f48749cbfd24087f12f4a0d8d53e8c755fbdaa599c08f2" address="unix:///run/containerd/s/6a32d035b50fde913c7b542e4e3f46bbba69fab302c4e9b7715a2a62f6370818" protocol=ttrpc version=3 Jan 24 00:29:35.987821 systemd[1]: Started cri-containerd-fb274a7f2fd2e87d00f48749cbfd24087f12f4a0d8d53e8c755fbdaa599c08f2.scope - libcontainer container fb274a7f2fd2e87d00f48749cbfd24087f12f4a0d8d53e8c755fbdaa599c08f2. Jan 24 00:29:36.056000 audit: BPF prog-id=189 op=LOAD Jan 24 00:29:36.056000 audit[3784]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3330 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:36.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323734613766326664326538376430306634383734396362666432 Jan 24 00:29:36.057000 audit: BPF prog-id=190 op=LOAD Jan 24 00:29:36.057000 audit[3784]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3330 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:36.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323734613766326664326538376430306634383734396362666432 Jan 24 00:29:36.057000 audit: BPF prog-id=190 op=UNLOAD Jan 24 00:29:36.057000 audit[3784]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:36.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323734613766326664326538376430306634383734396362666432 Jan 24 00:29:36.057000 audit: BPF prog-id=189 op=UNLOAD Jan 24 00:29:36.057000 audit[3784]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3330 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:36.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323734613766326664326538376430306634383734396362666432 Jan 24 00:29:36.057000 audit: BPF prog-id=191 op=LOAD Jan 24 00:29:36.057000 audit[3784]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3330 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:36.057000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323734613766326664326538376430306634383734396362666432 Jan 24 00:29:36.081560 containerd[1583]: time="2026-01-24T00:29:36.081518558Z" level=info msg="StartContainer for \"fb274a7f2fd2e87d00f48749cbfd24087f12f4a0d8d53e8c755fbdaa599c08f2\" returns successfully" Jan 24 00:29:36.165438 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:29:36.165568 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:29:36.345995 kubelet[2775]: I0124 00:29:36.345180 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54bcd514-60de-4dea-9d5b-89d698a7baf9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "54bcd514-60de-4dea-9d5b-89d698a7baf9" (UID: "54bcd514-60de-4dea-9d5b-89d698a7baf9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:29:36.345995 kubelet[2775]: I0124 00:29:36.345522 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54bcd514-60de-4dea-9d5b-89d698a7baf9-whisker-ca-bundle\") pod \"54bcd514-60de-4dea-9d5b-89d698a7baf9\" (UID: \"54bcd514-60de-4dea-9d5b-89d698a7baf9\") " Jan 24 00:29:36.345995 kubelet[2775]: I0124 00:29:36.345562 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-248dp\" (UniqueName: \"kubernetes.io/projected/54bcd514-60de-4dea-9d5b-89d698a7baf9-kube-api-access-248dp\") pod \"54bcd514-60de-4dea-9d5b-89d698a7baf9\" (UID: \"54bcd514-60de-4dea-9d5b-89d698a7baf9\") " Jan 24 00:29:36.345995 kubelet[2775]: I0124 00:29:36.345579 2775 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54bcd514-60de-4dea-9d5b-89d698a7baf9-whisker-backend-key-pair\") pod \"54bcd514-60de-4dea-9d5b-89d698a7baf9\" (UID: \"54bcd514-60de-4dea-9d5b-89d698a7baf9\") " Jan 24 00:29:36.353486 kubelet[2775]: I0124 00:29:36.353382 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54bcd514-60de-4dea-9d5b-89d698a7baf9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "54bcd514-60de-4dea-9d5b-89d698a7baf9" (UID: "54bcd514-60de-4dea-9d5b-89d698a7baf9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:29:36.354411 kubelet[2775]: I0124 00:29:36.354247 2775 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54bcd514-60de-4dea-9d5b-89d698a7baf9-kube-api-access-248dp" (OuterVolumeSpecName: "kube-api-access-248dp") pod "54bcd514-60de-4dea-9d5b-89d698a7baf9" (UID: "54bcd514-60de-4dea-9d5b-89d698a7baf9"). InnerVolumeSpecName "kube-api-access-248dp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:29:36.446028 kubelet[2775]: I0124 00:29:36.445971 2775 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-248dp\" (UniqueName: \"kubernetes.io/projected/54bcd514-60de-4dea-9d5b-89d698a7baf9-kube-api-access-248dp\") on node \"172-234-200-140\" DevicePath \"\"" Jan 24 00:29:36.446028 kubelet[2775]: I0124 00:29:36.445993 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54bcd514-60de-4dea-9d5b-89d698a7baf9-whisker-backend-key-pair\") on node \"172-234-200-140\" DevicePath \"\"" Jan 24 00:29:36.446444 kubelet[2775]: I0124 00:29:36.446123 2775 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54bcd514-60de-4dea-9d5b-89d698a7baf9-whisker-ca-bundle\") on node \"172-234-200-140\" DevicePath \"\"" Jan 24 00:29:36.448999 kubelet[2775]: E0124 00:29:36.448828 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:36.458579 systemd[1]: Removed slice kubepods-besteffort-pod54bcd514_60de_4dea_9d5b_89d698a7baf9.slice - libcontainer container kubepods-besteffort-pod54bcd514_60de_4dea_9d5b_89d698a7baf9.slice. Jan 24 00:29:36.473419 kubelet[2775]: I0124 00:29:36.473373 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4p7jw" podStartSLOduration=1.794776933 podStartE2EDuration="10.473358742s" podCreationTimestamp="2026-01-24 00:29:26 +0000 UTC" firstStartedPulling="2026-01-24 00:29:27.20023638 +0000 UTC m=+20.963548567" lastFinishedPulling="2026-01-24 00:29:35.878818189 +0000 UTC m=+29.642130376" observedRunningTime="2026-01-24 00:29:36.466777035 +0000 UTC m=+30.230089242" watchObservedRunningTime="2026-01-24 00:29:36.473358742 +0000 UTC m=+30.236670929" Jan 24 00:29:36.518536 systemd[1]: Created slice kubepods-besteffort-pod3a284cf5_2a50_4ae3_ac95_941e18d4d850.slice - libcontainer container kubepods-besteffort-pod3a284cf5_2a50_4ae3_ac95_941e18d4d850.slice. 
Jan 24 00:29:36.548722 kubelet[2775]: I0124 00:29:36.546390 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3a284cf5-2a50-4ae3-ac95-941e18d4d850-whisker-backend-key-pair\") pod \"whisker-df9447cd6-v4qqr\" (UID: \"3a284cf5-2a50-4ae3-ac95-941e18d4d850\") " pod="calico-system/whisker-df9447cd6-v4qqr" Jan 24 00:29:36.548722 kubelet[2775]: I0124 00:29:36.546431 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a284cf5-2a50-4ae3-ac95-941e18d4d850-whisker-ca-bundle\") pod \"whisker-df9447cd6-v4qqr\" (UID: \"3a284cf5-2a50-4ae3-ac95-941e18d4d850\") " pod="calico-system/whisker-df9447cd6-v4qqr" Jan 24 00:29:36.548722 kubelet[2775]: I0124 00:29:36.546450 2775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq9c9\" (UniqueName: \"kubernetes.io/projected/3a284cf5-2a50-4ae3-ac95-941e18d4d850-kube-api-access-xq9c9\") pod \"whisker-df9447cd6-v4qqr\" (UID: \"3a284cf5-2a50-4ae3-ac95-941e18d4d850\") " pod="calico-system/whisker-df9447cd6-v4qqr" Jan 24 00:29:36.826278 containerd[1583]: time="2026-01-24T00:29:36.826237236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df9447cd6-v4qqr,Uid:3a284cf5-2a50-4ae3-ac95-941e18d4d850,Namespace:calico-system,Attempt:0,}" Jan 24 00:29:36.855771 systemd[1]: var-lib-kubelet-pods-54bcd514\x2d60de\x2d4dea\x2d9d5b\x2d89d698a7baf9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d248dp.mount: Deactivated successfully. Jan 24 00:29:36.856210 systemd[1]: var-lib-kubelet-pods-54bcd514\x2d60de\x2d4dea\x2d9d5b\x2d89d698a7baf9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
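The CreatePodSandbox failures earlier in the log (coredns-668d6bf9bc-jw5xc, goldmane-666569f655-b2qbk) all trace back to the same missing file: the Calico CNI plugin stats /var/lib/calico/nodename, which calico/node writes to its hostPath mount once it is running. A minimal Go sketch of that check, handy when reproducing the condition by hand (an illustration, not the plugin's actual code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Mirrors the check implied by the sandbox errors above: the Calico CNI plugin
// expects /var/lib/calico/nodename to exist before it can set up pod networking.
func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "%s missing: calico/node is not running or /var/lib/calico/ is not mounted\n", nodenameFile)
			os.Exit(1)
		}
		fmt.Fprintln(os.Stderr, "stat failed:", err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", strings.TrimSpace(string(data)))
}
```

Once the calico-node container started above, the file is present, and the whisker sandbox created next goes through without the error.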
Jan 24 00:29:36.962294 systemd-networkd[1487]: cali5a1ab924e9f: Link UP Jan 24 00:29:36.963628 systemd-networkd[1487]: cali5a1ab924e9f: Gained carrier Jan 24 00:29:36.985639 containerd[1583]: 2026-01-24 00:29:36.849 [INFO][3851] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:29:36.985639 containerd[1583]: 2026-01-24 00:29:36.888 [INFO][3851] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0 whisker-df9447cd6- calico-system 3a284cf5-2a50-4ae3-ac95-941e18d4d850 877 0 2026-01-24 00:29:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:df9447cd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-200-140 whisker-df9447cd6-v4qqr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5a1ab924e9f [] [] }} ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Namespace="calico-system" Pod="whisker-df9447cd6-v4qqr" WorkloadEndpoint="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-" Jan 24 00:29:36.985639 containerd[1583]: 2026-01-24 00:29:36.888 [INFO][3851] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Namespace="calico-system" Pod="whisker-df9447cd6-v4qqr" WorkloadEndpoint="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0" Jan 24 00:29:36.985639 containerd[1583]: 2026-01-24 00:29:36.919 [INFO][3863] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" HandleID="k8s-pod-network.b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Workload="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0" Jan 24 00:29:36.987149 containerd[1583]: 2026-01-24 00:29:36.919 [INFO][3863] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" HandleID="k8s-pod-network.b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Workload="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-140", "pod":"whisker-df9447cd6-v4qqr", "timestamp":"2026-01-24 00:29:36.919275979 +0000 UTC"}, Hostname:"172-234-200-140", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:29:36.987149 containerd[1583]: 2026-01-24 00:29:36.919 [INFO][3863] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:29:36.987149 containerd[1583]: 2026-01-24 00:29:36.919 [INFO][3863] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:29:36.987149 containerd[1583]: 2026-01-24 00:29:36.919 [INFO][3863] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-140' Jan 24 00:29:36.987149 containerd[1583]: 2026-01-24 00:29:36.925 [INFO][3863] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" host="172-234-200-140" Jan 24 00:29:36.987149 containerd[1583]: 2026-01-24 00:29:36.929 [INFO][3863] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-140" Jan 24 00:29:36.987149 containerd[1583]: 2026-01-24 00:29:36.933 [INFO][3863] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:36.987149 containerd[1583]: 2026-01-24 00:29:36.935 [INFO][3863] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:36.987149 containerd[1583]: 2026-01-24 00:29:36.936 [INFO][3863] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:36.987149 containerd[1583]: 2026-01-24 00:29:36.937 [INFO][3863] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" host="172-234-200-140" Jan 24 00:29:36.987395 containerd[1583]: 2026-01-24 00:29:36.938 [INFO][3863] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970 Jan 24 00:29:36.987395 containerd[1583]: 2026-01-24 00:29:36.941 [INFO][3863] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" host="172-234-200-140" Jan 24 00:29:36.987395 containerd[1583]: 2026-01-24 00:29:36.946 [INFO][3863] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.193/26] block=192.168.94.192/26 handle="k8s-pod-network.b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" host="172-234-200-140" Jan 24 00:29:36.987395 containerd[1583]: 2026-01-24 00:29:36.946 [INFO][3863] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.193/26] handle="k8s-pod-network.b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" host="172-234-200-140" Jan 24 00:29:36.987395 containerd[1583]: 2026-01-24 00:29:36.946 [INFO][3863] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:29:36.987395 containerd[1583]: 2026-01-24 00:29:36.946 [INFO][3863] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.193/26] IPv6=[] ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" HandleID="k8s-pod-network.b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Workload="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0" Jan 24 00:29:36.987516 containerd[1583]: 2026-01-24 00:29:36.951 [INFO][3851] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Namespace="calico-system" Pod="whisker-df9447cd6-v4qqr" WorkloadEndpoint="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0", GenerateName:"whisker-df9447cd6-", Namespace:"calico-system", SelfLink:"", UID:"3a284cf5-2a50-4ae3-ac95-941e18d4d850", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"df9447cd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"", Pod:"whisker-df9447cd6-v4qqr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5a1ab924e9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:36.987516 containerd[1583]: 2026-01-24 00:29:36.951 [INFO][3851] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.193/32] ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Namespace="calico-system" Pod="whisker-df9447cd6-v4qqr" WorkloadEndpoint="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0" Jan 24 00:29:36.987590 containerd[1583]: 2026-01-24 00:29:36.951 [INFO][3851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a1ab924e9f ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Namespace="calico-system" Pod="whisker-df9447cd6-v4qqr" WorkloadEndpoint="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0" Jan 24 00:29:36.987590 containerd[1583]: 2026-01-24 00:29:36.964 [INFO][3851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Namespace="calico-system" Pod="whisker-df9447cd6-v4qqr" WorkloadEndpoint="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0" Jan 24 00:29:36.987633 containerd[1583]: 2026-01-24 00:29:36.965 [INFO][3851] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Namespace="calico-system" Pod="whisker-df9447cd6-v4qqr" 
WorkloadEndpoint="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0", GenerateName:"whisker-df9447cd6-", Namespace:"calico-system", SelfLink:"", UID:"3a284cf5-2a50-4ae3-ac95-941e18d4d850", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"df9447cd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970", Pod:"whisker-df9447cd6-v4qqr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5a1ab924e9f", MAC:"ce:fc:68:a8:72:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:36.987892 containerd[1583]: 2026-01-24 00:29:36.979 [INFO][3851] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" Namespace="calico-system" Pod="whisker-df9447cd6-v4qqr" WorkloadEndpoint="172--234--200--140-k8s-whisker--df9447cd6--v4qqr-eth0" Jan 24 00:29:37.023775 containerd[1583]: time="2026-01-24T00:29:37.023702897Z" level=info msg="connecting to shim b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970" address="unix:///run/containerd/s/36bea97c07773eb45869ab43b2637f7fd884f354b0d94d55361570a1814d5856" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:37.052791 systemd[1]: Started cri-containerd-b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970.scope - libcontainer container b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970. 
Jan 24 00:29:37.062000 audit: BPF prog-id=192 op=LOAD Jan 24 00:29:37.064870 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 24 00:29:37.064945 kernel: audit: type=1334 audit(1769214577.062:601): prog-id=192 op=LOAD Jan 24 00:29:37.066000 audit: BPF prog-id=193 op=LOAD Jan 24 00:29:37.073683 kernel: audit: type=1334 audit(1769214577.066:602): prog-id=193 op=LOAD Jan 24 00:29:37.073746 kernel: audit: type=1300 audit(1769214577.066:602): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.066000 audit[3895]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234306536363136643633346233616532323332396230313334666466 Jan 24 00:29:37.085941 kernel: audit: type=1327 audit(1769214577.066:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234306536363136643633346233616532323332396230313334666466 Jan 24 00:29:37.088089 kernel: audit: type=1334 audit(1769214577.066:603): prog-id=193 op=UNLOAD Jan 24 00:29:37.066000 audit: BPF prog-id=193 op=UNLOAD Jan 24 00:29:37.095452 kernel: audit: type=1300 audit(1769214577.066:603): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.066000 audit[3895]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234306536363136643633346233616532323332396230313334666466 Jan 24 00:29:37.106172 kernel: audit: type=1327 audit(1769214577.066:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234306536363136643633346233616532323332396230313334666466 Jan 24 00:29:37.106225 kernel: audit: type=1334 audit(1769214577.066:604): prog-id=194 op=LOAD Jan 24 00:29:37.066000 audit: BPF prog-id=194 op=LOAD Jan 24 00:29:37.112252 kernel: audit: type=1300 audit(1769214577.066:604): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.066000 audit[3895]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.122117 kernel: audit: type=1327 audit(1769214577.066:604): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234306536363136643633346233616532323332396230313334666466 Jan 24 00:29:37.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234306536363136643633346233616532323332396230313334666466 Jan 24 00:29:37.066000 audit: BPF prog-id=195 op=LOAD Jan 24 00:29:37.066000 audit[3895]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234306536363136643633346233616532323332396230313334666466 Jan 24 00:29:37.066000 audit: BPF prog-id=195 op=UNLOAD Jan 24 00:29:37.066000 audit[3895]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234306536363136643633346233616532323332396230313334666466 Jan 24 00:29:37.066000 audit: BPF prog-id=194 op=UNLOAD Jan 24 00:29:37.066000 audit[3895]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234306536363136643633346233616532323332396230313334666466 Jan 24 00:29:37.066000 audit: BPF prog-id=196 op=LOAD Jan 24 00:29:37.066000 audit[3895]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.066000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234306536363136643633346233616532323332396230313334666466 Jan 24 00:29:37.130369 containerd[1583]: time="2026-01-24T00:29:37.130328923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df9447cd6-v4qqr,Uid:3a284cf5-2a50-4ae3-ac95-941e18d4d850,Namespace:calico-system,Attempt:0,} returns sandbox id \"b40e6616d634b3ae22329b0134fdf53ee431a64893c4b15927c6c45b6dca8970\"" Jan 24 00:29:37.132821 containerd[1583]: time="2026-01-24T00:29:37.132797352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:29:37.261427 containerd[1583]: time="2026-01-24T00:29:37.261205588Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:37.262953 containerd[1583]: time="2026-01-24T00:29:37.262920267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:29:37.263914 containerd[1583]: time="2026-01-24T00:29:37.263891417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:37.264328 kubelet[2775]: E0124 00:29:37.264272 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:29:37.264487 kubelet[2775]: E0124 00:29:37.264462 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:29:37.265109 kubelet[2775]: E0124 00:29:37.265068 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:afba9b21ebd04ac3a3f6ee43ebc36fd5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xq9c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df9447cd6-v4qqr_calico-system(3a284cf5-2a50-4ae3-ac95-941e18d4d850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:37.272018 containerd[1583]: time="2026-01-24T00:29:37.271939033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:29:37.405945 containerd[1583]: time="2026-01-24T00:29:37.405724566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:37.406792 containerd[1583]: time="2026-01-24T00:29:37.406739355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:29:37.406858 containerd[1583]: time="2026-01-24T00:29:37.406843195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:37.407027 kubelet[2775]: E0124 00:29:37.406988 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:29:37.407721 kubelet[2775]: E0124 00:29:37.407040 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:29:37.407757 kubelet[2775]: E0124 00:29:37.407178 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xq9c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df9447cd6-v4qqr_calico-system(3a284cf5-2a50-4ae3-ac95-941e18d4d850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:37.409330 kubelet[2775]: E0124 00:29:37.409238 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:29:37.452266 kubelet[2775]: I0124 00:29:37.451082 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:29:37.452266 kubelet[2775]: E0124 00:29:37.451413 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:37.456150 kubelet[2775]: E0124 00:29:37.454738 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:29:37.482000 audit[3946]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:37.482000 audit[3946]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdd2136be0 a2=0 a3=7ffdd2136bcc items=0 ppid=2879 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:37.489000 audit[3946]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:37.489000 audit[3946]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd2136be0 a2=0 a3=0 items=0 ppid=2879 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:37.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:38.327081 kubelet[2775]: I0124 00:29:38.327027 2775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54bcd514-60de-4dea-9d5b-89d698a7baf9" path="/var/lib/kubelet/pods/54bcd514-60de-4dea-9d5b-89d698a7baf9/volumes" Jan 24 00:29:38.455181 kubelet[2775]: E0124 00:29:38.455110 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:29:38.644829 systemd-networkd[1487]: cali5a1ab924e9f: Gained IPv6LL Jan 24 00:29:42.079692 kubelet[2775]: I0124 00:29:42.079467 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:29:42.080798 kubelet[2775]: E0124 00:29:42.080391 2775 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:42.120965 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 24 00:29:42.121074 kernel: audit: type=1325 audit(1769214582.112:611): table=filter:119 family=2 entries=21 op=nft_register_rule pid=4086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:42.112000 audit[4086]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:42.112000 audit[4086]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffd0a8de80 a2=0 a3=7fffd0a8de6c items=0 ppid=2879 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.134732 kernel: audit: type=1300 audit(1769214582.112:611): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffd0a8de80 a2=0 a3=7fffd0a8de6c items=0 ppid=2879 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.112000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:42.145345 kernel: audit: type=1327 audit(1769214582.112:611): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:42.145411 kernel: audit: type=1325 audit(1769214582.133:612): table=nat:120 family=2 entries=19 op=nft_register_chain pid=4086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:42.133000 audit[4086]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:42.154550 kernel: audit: type=1300 audit(1769214582.133:612): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffd0a8de80 a2=0 a3=7fffd0a8de6c items=0 ppid=2879 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.133000 audit[4086]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffd0a8de80 a2=0 a3=7fffd0a8de6c items=0 ppid=2879 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:42.159752 kernel: audit: type=1327 audit(1769214582.133:612): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:42.460302 kubelet[2775]: E0124 00:29:42.460195 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:42.894000 audit: BPF prog-id=197 op=LOAD Jan 24 00:29:42.898706 kernel: audit: type=1334 audit(1769214582.894:613): 
prog-id=197 op=LOAD Jan 24 00:29:42.894000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe3206320 a2=98 a3=1fffffffffffffff items=0 ppid=4125 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.909866 kernel: audit: type=1300 audit(1769214582.894:613): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe3206320 a2=98 a3=1fffffffffffffff items=0 ppid=4125 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:29:42.919327 kernel: audit: type=1327 audit(1769214582.894:613): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:29:42.919374 kernel: audit: type=1334 audit(1769214582.894:614): prog-id=197 op=UNLOAD Jan 24 00:29:42.894000 audit: BPF prog-id=197 op=UNLOAD Jan 24 00:29:42.894000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffe32062f0 a3=0 items=0 ppid=4125 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:29:42.894000 audit: BPF prog-id=198 op=LOAD Jan 24 00:29:42.894000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe3206200 a2=94 a3=3 items=0 ppid=4125 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:29:42.894000 audit: BPF prog-id=198 op=UNLOAD Jan 24 00:29:42.894000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffe3206200 a2=94 a3=3 items=0 ppid=4125 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 
00:29:42.894000 audit: BPF prog-id=199 op=LOAD Jan 24 00:29:42.894000 audit[4141]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe3206240 a2=94 a3=7fffe3206420 items=0 ppid=4125 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:29:42.894000 audit: BPF prog-id=199 op=UNLOAD Jan 24 00:29:42.894000 audit[4141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffe3206240 a2=94 a3=7fffe3206420 items=0 ppid=4125 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 24 00:29:42.897000 audit: BPF prog-id=200 op=LOAD Jan 24 00:29:42.897000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5ce224b0 a2=98 a3=3 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:42.897000 audit: BPF prog-id=200 op=UNLOAD Jan 24 00:29:42.897000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff5ce22480 a3=0 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:42.897000 audit: BPF prog-id=201 op=LOAD Jan 24 00:29:42.897000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5ce222a0 a2=94 a3=54428f items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:42.897000 audit: BPF prog-id=201 op=UNLOAD Jan 24 00:29:42.897000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff5ce222a0 a2=94 a3=54428f items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:42.897000 audit: BPF prog-id=202 op=LOAD Jan 24 00:29:42.897000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5ce222d0 a2=94 a3=2 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:42.897000 audit: BPF prog-id=202 op=UNLOAD Jan 24 00:29:42.897000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff5ce222d0 a2=0 a3=2 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:42.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.085000 audit: BPF prog-id=203 op=LOAD Jan 24 00:29:43.085000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5ce22190 a2=94 a3=1 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.085000 audit: BPF prog-id=203 op=UNLOAD Jan 24 00:29:43.085000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff5ce22190 a2=94 a3=1 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.094000 audit: BPF prog-id=204 op=LOAD Jan 24 00:29:43.094000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff5ce22180 a2=94 a3=4 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.094000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.094000 audit: BPF prog-id=204 op=UNLOAD Jan 24 00:29:43.094000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff5ce22180 a2=0 a3=4 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.094000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.094000 audit: BPF prog-id=205 op=LOAD Jan 24 00:29:43.094000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff5ce21fe0 a2=94 a3=5 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.094000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.094000 audit: BPF prog-id=205 op=UNLOAD Jan 24 00:29:43.094000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff5ce21fe0 a2=0 a3=5 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.094000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.094000 audit: BPF prog-id=206 op=LOAD Jan 24 00:29:43.094000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff5ce22200 a2=94 a3=6 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.094000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.094000 audit: BPF prog-id=206 op=UNLOAD Jan 24 00:29:43.094000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff5ce22200 a2=0 a3=6 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.094000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.095000 audit: BPF prog-id=207 op=LOAD Jan 24 00:29:43.095000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff5ce219b0 a2=94 a3=88 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.095000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.095000 audit: BPF prog-id=208 op=LOAD Jan 24 00:29:43.095000 audit[4142]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff5ce21830 a2=94 a3=2 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.095000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.096000 audit: BPF prog-id=208 op=UNLOAD Jan 24 00:29:43.096000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff5ce21860 a2=0 a3=7fff5ce21960 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.096000 audit: BPF prog-id=207 op=UNLOAD Jan 24 00:29:43.096000 audit[4142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=36f5fd10 a2=0 a3=42e6e3bcef3ef2c1 items=0 ppid=4125 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 24 00:29:43.106000 audit: BPF prog-id=209 op=LOAD Jan 24 00:29:43.106000 audit[4145]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb25ad460 a2=98 a3=1999999999999999 items=0 ppid=4125 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.106000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:29:43.106000 audit: BPF prog-id=209 op=UNLOAD Jan 24 00:29:43.106000 audit[4145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdb25ad430 a3=0 items=0 ppid=4125 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.106000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:29:43.106000 audit: BPF prog-id=210 op=LOAD Jan 24 00:29:43.106000 audit[4145]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb25ad340 a2=94 a3=ffff items=0 ppid=4125 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.106000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:29:43.106000 audit: BPF prog-id=210 op=UNLOAD Jan 24 00:29:43.106000 audit[4145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdb25ad340 a2=94 a3=ffff items=0 ppid=4125 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.106000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:29:43.106000 audit: BPF prog-id=211 op=LOAD Jan 24 00:29:43.106000 audit[4145]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb25ad380 a2=94 a3=7ffdb25ad560 items=0 ppid=4125 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.106000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:29:43.106000 audit: BPF prog-id=211 op=UNLOAD Jan 24 00:29:43.106000 audit[4145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdb25ad380 a2=94 a3=7ffdb25ad560 items=0 ppid=4125 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.106000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 24 00:29:43.169722 systemd-networkd[1487]: vxlan.calico: Link UP Jan 24 00:29:43.169730 systemd-networkd[1487]: vxlan.calico: Gained carrier Jan 24 00:29:43.195000 audit: BPF prog-id=212 op=LOAD Jan 24 00:29:43.195000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9b0f3aa0 a2=98 a3=0 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.195000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.196000 audit: BPF prog-id=212 op=UNLOAD Jan 24 00:29:43.196000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff9b0f3a70 a3=0 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.196000 audit: BPF prog-id=213 op=LOAD Jan 24 00:29:43.196000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9b0f38b0 a2=94 a3=54428f items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.196000 audit: BPF prog-id=213 op=UNLOAD Jan 24 00:29:43.196000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff9b0f38b0 a2=94 a3=54428f items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.196000 audit: BPF prog-id=214 op=LOAD Jan 24 00:29:43.196000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9b0f38e0 a2=94 a3=2 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.196000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.196000 audit: BPF prog-id=214 op=UNLOAD Jan 24 00:29:43.196000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff9b0f38e0 a2=0 a3=2 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.196000 audit: BPF prog-id=215 op=LOAD Jan 24 00:29:43.196000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9b0f3690 a2=94 a3=4 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.196000 audit: BPF prog-id=215 op=UNLOAD Jan 24 00:29:43.196000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff9b0f3690 a2=94 a3=4 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.196000 audit: BPF prog-id=216 op=LOAD Jan 24 00:29:43.196000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9b0f3790 a2=94 a3=7fff9b0f3910 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.196000 audit: BPF prog-id=216 op=UNLOAD Jan 24 00:29:43.196000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff9b0f3790 a2=0 a3=7fff9b0f3910 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.196000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.197000 audit: BPF prog-id=217 op=LOAD Jan 24 00:29:43.197000 audit[4172]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9b0f2ec0 a2=94 a3=2 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.197000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.197000 audit: BPF prog-id=217 op=UNLOAD Jan 24 00:29:43.197000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff9b0f2ec0 a2=0 a3=2 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.197000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.197000 audit: BPF prog-id=218 op=LOAD Jan 24 00:29:43.197000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9b0f2fc0 a2=94 a3=30 items=0 ppid=4125 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.197000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 24 00:29:43.205000 audit: BPF prog-id=219 op=LOAD Jan 24 00:29:43.205000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffadd79290 a2=98 a3=0 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.205000 audit: BPF prog-id=219 op=UNLOAD Jan 24 00:29:43.205000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffadd79260 a3=0 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.205000 audit: BPF prog-id=220 op=LOAD Jan 24 00:29:43.205000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffadd79080 a2=94 a3=54428f items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.205000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.205000 audit: BPF prog-id=220 op=UNLOAD Jan 24 00:29:43.205000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffadd79080 a2=94 a3=54428f items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.205000 audit: BPF prog-id=221 op=LOAD Jan 24 00:29:43.205000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffadd790b0 a2=94 a3=2 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.205000 audit: BPF prog-id=221 op=UNLOAD Jan 24 00:29:43.205000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffadd790b0 a2=0 a3=2 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.325550 kubelet[2775]: E0124 00:29:43.324095 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:43.327086 containerd[1583]: time="2026-01-24T00:29:43.326640335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ds6l9,Uid:f51ff3e1-9f79-414a-8962-e8fa83c40e67,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:43.393000 audit: BPF prog-id=222 op=LOAD Jan 24 00:29:43.393000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffadd78f70 a2=94 a3=1 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.393000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.394000 audit: BPF prog-id=222 op=UNLOAD Jan 24 00:29:43.394000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffadd78f70 a2=94 a3=1 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.394000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.406000 audit: BPF prog-id=223 op=LOAD Jan 24 00:29:43.406000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffadd78f60 a2=94 a3=4 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.406000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.406000 audit: BPF prog-id=223 op=UNLOAD Jan 24 00:29:43.406000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffadd78f60 a2=0 a3=4 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.406000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.406000 audit: BPF prog-id=224 op=LOAD Jan 24 00:29:43.406000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffadd78dc0 a2=94 a3=5 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.406000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.406000 audit: BPF prog-id=224 op=UNLOAD Jan 24 00:29:43.406000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffadd78dc0 a2=0 a3=5 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.406000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.407000 audit: BPF prog-id=225 op=LOAD Jan 24 00:29:43.407000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffadd78fe0 a2=94 a3=6 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.407000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.407000 audit: BPF prog-id=225 op=UNLOAD Jan 24 00:29:43.407000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffadd78fe0 a2=0 a3=6 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.407000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.407000 audit: BPF prog-id=226 op=LOAD Jan 24 00:29:43.407000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffadd78790 a2=94 a3=88 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.407000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.407000 audit: BPF prog-id=227 op=LOAD Jan 24 00:29:43.407000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffadd78610 a2=94 a3=2 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.407000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.407000 audit: BPF prog-id=227 op=UNLOAD Jan 24 00:29:43.407000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffadd78640 a2=0 a3=7fffadd78740 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.407000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.408000 audit: BPF prog-id=226 op=UNLOAD Jan 24 00:29:43.408000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=21770d10 a2=0 a3=c9930a35813caf35 items=0 ppid=4125 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 24 00:29:43.415000 audit: BPF prog-id=218 op=UNLOAD Jan 24 00:29:43.415000 audit[4125]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0007f0ac0 a2=0 a3=0 items=0 ppid=3925 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.415000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 24 00:29:43.440805 systemd-networkd[1487]: cali6b00ec0aa87: Link UP Jan 24 00:29:43.445005 systemd-networkd[1487]: cali6b00ec0aa87: Gained carrier Jan 24 00:29:43.470639 containerd[1583]: 2026-01-24 00:29:43.374 [INFO][4180] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0 coredns-668d6bf9bc- kube-system f51ff3e1-9f79-414a-8962-e8fa83c40e67 805 0 2026-01-24 00:29:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-200-140 coredns-668d6bf9bc-ds6l9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b00ec0aa87 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds6l9" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-" Jan 24 00:29:43.470639 containerd[1583]: 2026-01-24 00:29:43.374 [INFO][4180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds6l9" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0" Jan 24 00:29:43.470639 containerd[1583]: 2026-01-24 00:29:43.400 [INFO][4192] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" HandleID="k8s-pod-network.613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Workload="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0" Jan 24 00:29:43.471520 containerd[1583]: 2026-01-24 00:29:43.400 [INFO][4192] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" HandleID="k8s-pod-network.613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Workload="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047efa0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-200-140", "pod":"coredns-668d6bf9bc-ds6l9", "timestamp":"2026-01-24 00:29:43.400286428 +0000 UTC"}, Hostname:"172-234-200-140", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:29:43.471520 containerd[1583]: 2026-01-24 00:29:43.400 [INFO][4192] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:29:43.471520 containerd[1583]: 2026-01-24 00:29:43.400 [INFO][4192] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
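The audit PROCTITLE fields in the bpftool records above are the process command lines, hex-encoded with NUL bytes separating the arguments. A minimal decoding sketch (Python, standard library only; the function name is illustrative, not part of any tool here) recovers the underlying invocations — for example `627066746F6F6C006D6170006C697374002D2D6A736F6E` is `bpftool map list --json`, and the longer values decode to the Calico failsafe-ports map creation and the XDP prefilter program load recorded above:

```python
# Minimal sketch: decode an audit PROCTITLE hex value back into the argv it
# records. PROCTITLE carries the raw /proc/<pid>/cmdline bytes, hex-encoded,
# with NUL bytes separating the individual arguments.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    # Split on NUL separators; drop the empty element a trailing NUL produces.
    args = [a.decode("utf-8", errors="replace") for a in raw.split(b"\x00") if a]
    return " ".join(args)

if __name__ == "__main__":
    # Hex value copied from the audit records above.
    print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
    # -> bpftool map list --json
```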
Jan 24 00:29:43.471520 containerd[1583]: 2026-01-24 00:29:43.400 [INFO][4192] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-140' Jan 24 00:29:43.471520 containerd[1583]: 2026-01-24 00:29:43.405 [INFO][4192] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" host="172-234-200-140" Jan 24 00:29:43.471520 containerd[1583]: 2026-01-24 00:29:43.411 [INFO][4192] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-140" Jan 24 00:29:43.471520 containerd[1583]: 2026-01-24 00:29:43.415 [INFO][4192] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:43.471520 containerd[1583]: 2026-01-24 00:29:43.418 [INFO][4192] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:43.471520 containerd[1583]: 2026-01-24 00:29:43.421 [INFO][4192] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:43.471520 containerd[1583]: 2026-01-24 00:29:43.424 [INFO][4192] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" host="172-234-200-140" Jan 24 00:29:43.471825 containerd[1583]: 2026-01-24 00:29:43.425 [INFO][4192] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae Jan 24 00:29:43.471825 containerd[1583]: 2026-01-24 00:29:43.428 [INFO][4192] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" host="172-234-200-140" Jan 24 00:29:43.471825 containerd[1583]: 2026-01-24 00:29:43.433 [INFO][4192] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.194/26] block=192.168.94.192/26 handle="k8s-pod-network.613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" host="172-234-200-140" Jan 24 00:29:43.471825 containerd[1583]: 2026-01-24 00:29:43.433 [INFO][4192] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.194/26] handle="k8s-pod-network.613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" host="172-234-200-140" Jan 24 00:29:43.471825 containerd[1583]: 2026-01-24 00:29:43.433 [INFO][4192] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:29:43.471825 containerd[1583]: 2026-01-24 00:29:43.433 [INFO][4192] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.194/26] IPv6=[] ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" HandleID="k8s-pod-network.613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Workload="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0" Jan 24 00:29:43.471962 containerd[1583]: 2026-01-24 00:29:43.436 [INFO][4180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds6l9" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f51ff3e1-9f79-414a-8962-e8fa83c40e67", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"", Pod:"coredns-668d6bf9bc-ds6l9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b00ec0aa87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:43.471962 containerd[1583]: 2026-01-24 00:29:43.436 [INFO][4180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.194/32] ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds6l9" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0" Jan 24 00:29:43.471962 containerd[1583]: 2026-01-24 00:29:43.437 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b00ec0aa87 ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds6l9" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0" Jan 24 00:29:43.471962 containerd[1583]: 2026-01-24 00:29:43.445 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds6l9" 
WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0" Jan 24 00:29:43.471962 containerd[1583]: 2026-01-24 00:29:43.451 [INFO][4180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds6l9" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f51ff3e1-9f79-414a-8962-e8fa83c40e67", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae", Pod:"coredns-668d6bf9bc-ds6l9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b00ec0aa87", MAC:"6e:17:5d:a1:5a:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:43.471962 containerd[1583]: 2026-01-24 00:29:43.463 [INFO][4180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-ds6l9" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--ds6l9-eth0" Jan 24 00:29:43.496287 containerd[1583]: time="2026-01-24T00:29:43.496241190Z" level=info msg="connecting to shim 613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae" address="unix:///run/containerd/s/8e770cd92fa8846f94716cacea4820333bc00876459723365b303a7df37a51fc" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:43.510000 audit[4239]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4239 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:29:43.510000 audit[4239]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffda5982990 a2=0 a3=7ffda598297c items=0 ppid=4125 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.510000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:43.531000 audit[4247]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:29:43.531000 audit[4247]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd06a5ad70 a2=0 a3=7ffd06a5ad5c items=0 ppid=4125 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.531000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:43.535000 audit[4234]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4234 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:29:43.535000 audit[4234]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff06764f70 a2=0 a3=7fff06764f5c items=0 ppid=4125 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.535000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:43.550211 systemd[1]: Started cri-containerd-613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae.scope - libcontainer container 613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae. 
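The NETFILTER_CFG entries directly above use the same hex encoding; the proctitle shown decodes to `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000`, which lines up with comm="iptables-nft-re" and exe="/usr/sbin/xtables-nft-multi". A self-contained spot check (same stdlib-only approach as the sketch earlier, with the hex copied from the record above):

```python
# Decode the iptables-nft-restore proctitle recorded in the NETFILTER_CFG events.
hex_value = ("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
             "002D2D766572626F7365002D2D77616974003130"
             "002D2D776169742D696E74657276616C003530303030")
print(" ".join(p.decode() for p in bytes.fromhex(hex_value).split(b"\x00") if p))
# -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
```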
Jan 24 00:29:43.537000 audit[4256]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4256 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:29:43.537000 audit[4256]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7fff8a6cae90 a2=0 a3=55d3cb502000 items=0 ppid=4125 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.537000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:43.564000 audit: BPF prog-id=228 op=LOAD Jan 24 00:29:43.565000 audit: BPF prog-id=229 op=LOAD Jan 24 00:29:43.565000 audit[4246]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4231 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631336465626438343438313939346233613731336165623437306139 Jan 24 00:29:43.565000 audit: BPF prog-id=229 op=UNLOAD Jan 24 00:29:43.565000 audit[4246]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4231 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631336465626438343438313939346233613731336165623437306139 Jan 24 00:29:43.566000 audit: BPF prog-id=230 op=LOAD Jan 24 00:29:43.566000 audit[4246]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4231 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631336465626438343438313939346233613731336165623437306139 Jan 24 00:29:43.566000 audit: BPF prog-id=231 op=LOAD Jan 24 00:29:43.566000 audit[4246]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4231 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631336465626438343438313939346233613731336165623437306139 Jan 24 00:29:43.566000 audit: BPF 
prog-id=231 op=UNLOAD Jan 24 00:29:43.566000 audit[4246]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4231 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631336465626438343438313939346233613731336165623437306139 Jan 24 00:29:43.566000 audit: BPF prog-id=230 op=UNLOAD Jan 24 00:29:43.566000 audit[4246]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4231 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631336465626438343438313939346233613731336165623437306139 Jan 24 00:29:43.567000 audit: BPF prog-id=232 op=LOAD Jan 24 00:29:43.567000 audit[4246]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4231 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631336465626438343438313939346233613731336165623437306139 Jan 24 00:29:43.586000 audit[4281]: NETFILTER_CFG table=filter:125 family=2 entries=42 op=nft_register_chain pid=4281 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:29:43.586000 audit[4281]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffc7e7e59e0 a2=0 a3=7ffc7e7e59cc items=0 ppid=4125 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.586000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:43.611908 containerd[1583]: time="2026-01-24T00:29:43.611833482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ds6l9,Uid:f51ff3e1-9f79-414a-8962-e8fa83c40e67,Namespace:kube-system,Attempt:0,} returns sandbox id \"613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae\"" Jan 24 00:29:43.612845 kubelet[2775]: E0124 00:29:43.612813 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:43.615596 containerd[1583]: time="2026-01-24T00:29:43.615561490Z" level=info msg="CreateContainer within sandbox \"613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:29:43.625804 containerd[1583]: time="2026-01-24T00:29:43.625378565Z" level=info msg="Container 5978274ee1708831b02b42e6e482f1c55bd69c94faff2d3eb4d0b3c5b54a95a1: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:43.631174 containerd[1583]: time="2026-01-24T00:29:43.631131242Z" level=info msg="CreateContainer within sandbox \"613debd84481994b3a713aeb470a9e239f18c770f2d7e7405b848aca688e76ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5978274ee1708831b02b42e6e482f1c55bd69c94faff2d3eb4d0b3c5b54a95a1\"" Jan 24 00:29:43.632201 containerd[1583]: time="2026-01-24T00:29:43.632180472Z" level=info msg="StartContainer for \"5978274ee1708831b02b42e6e482f1c55bd69c94faff2d3eb4d0b3c5b54a95a1\"" Jan 24 00:29:43.633519 containerd[1583]: time="2026-01-24T00:29:43.633464711Z" level=info msg="connecting to shim 5978274ee1708831b02b42e6e482f1c55bd69c94faff2d3eb4d0b3c5b54a95a1" address="unix:///run/containerd/s/8e770cd92fa8846f94716cacea4820333bc00876459723365b303a7df37a51fc" protocol=ttrpc version=3 Jan 24 00:29:43.658863 systemd[1]: Started cri-containerd-5978274ee1708831b02b42e6e482f1c55bd69c94faff2d3eb4d0b3c5b54a95a1.scope - libcontainer container 5978274ee1708831b02b42e6e482f1c55bd69c94faff2d3eb4d0b3c5b54a95a1. Jan 24 00:29:43.673000 audit: BPF prog-id=233 op=LOAD Jan 24 00:29:43.673000 audit: BPF prog-id=234 op=LOAD Jan 24 00:29:43.673000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4231 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539373832373465653137303838333162303262343265366534383266 Jan 24 00:29:43.674000 audit: BPF prog-id=234 op=UNLOAD Jan 24 00:29:43.674000 audit[4288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4231 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539373832373465653137303838333162303262343265366534383266 Jan 24 00:29:43.674000 audit: BPF prog-id=235 op=LOAD Jan 24 00:29:43.674000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4231 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539373832373465653137303838333162303262343265366534383266 Jan 24 00:29:43.674000 audit: BPF prog-id=236 op=LOAD Jan 24 00:29:43.674000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c00017a218 a2=98 a3=0 items=0 ppid=4231 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539373832373465653137303838333162303262343265366534383266 Jan 24 00:29:43.674000 audit: BPF prog-id=236 op=UNLOAD Jan 24 00:29:43.674000 audit[4288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4231 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539373832373465653137303838333162303262343265366534383266 Jan 24 00:29:43.674000 audit: BPF prog-id=235 op=UNLOAD Jan 24 00:29:43.674000 audit[4288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4231 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539373832373465653137303838333162303262343265366534383266 Jan 24 00:29:43.674000 audit: BPF prog-id=237 op=LOAD Jan 24 00:29:43.674000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4231 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:43.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539373832373465653137303838333162303262343265366534383266 Jan 24 00:29:43.700626 containerd[1583]: time="2026-01-24T00:29:43.699312688Z" level=info msg="StartContainer for \"5978274ee1708831b02b42e6e482f1c55bd69c94faff2d3eb4d0b3c5b54a95a1\" returns successfully" Jan 24 00:29:44.324525 containerd[1583]: time="2026-01-24T00:29:44.324266776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547797c6d5-k28bz,Uid:1fd846d3-1f7b-431f-b783-0e9765cbef2a,Namespace:calico-system,Attempt:0,}" Jan 24 00:29:44.324854 containerd[1583]: time="2026-01-24T00:29:44.324834795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569b7b4d59-8g26m,Uid:2078a205-dd8a-4be4-86c7-d47960d16fb3,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:29:44.450495 systemd-networkd[1487]: cali6741517d56d: Link UP Jan 24 00:29:44.452344 systemd-networkd[1487]: cali6741517d56d: Gained carrier Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.381 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0 calico-apiserver-569b7b4d59- calico-apiserver 2078a205-dd8a-4be4-86c7-d47960d16fb3 808 0 2026-01-24 00:29:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:569b7b4d59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-200-140 calico-apiserver-569b7b4d59-8g26m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6741517d56d [] [] }} ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-8g26m" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.381 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-8g26m" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.410 [INFO][4351] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" HandleID="k8s-pod-network.b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Workload="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.410 [INFO][4351] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" HandleID="k8s-pod-network.b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Workload="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-200-140", "pod":"calico-apiserver-569b7b4d59-8g26m", "timestamp":"2026-01-24 00:29:44.410150903 +0000 UTC"}, Hostname:"172-234-200-140", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.410 [INFO][4351] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.410 [INFO][4351] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.410 [INFO][4351] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-140' Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.416 [INFO][4351] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" host="172-234-200-140" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.420 [INFO][4351] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-140" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.425 [INFO][4351] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.427 [INFO][4351] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.429 [INFO][4351] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.429 [INFO][4351] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" host="172-234-200-140" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.431 [INFO][4351] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4 Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.434 [INFO][4351] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" host="172-234-200-140" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.440 [INFO][4351] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.195/26] block=192.168.94.192/26 handle="k8s-pod-network.b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" host="172-234-200-140" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.440 [INFO][4351] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.195/26] handle="k8s-pod-network.b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" host="172-234-200-140" Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.440 [INFO][4351] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
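The two IPAM claims logged so far — 192.168.94.194/26 for coredns-668d6bf9bc-ds6l9 and 192.168.94.195/26 for calico-apiserver-569b7b4d59-8g26m — both come out of the host's affine block 192.168.94.192/26. A minimal sketch using Python's stdlib ipaddress module (addresses copied from the log) confirms they fall inside that block and shows its size:

```python
# Minimal sketch: check the IPAM allocations logged above against the host's
# affine block 192.168.94.192/26 (all values taken from the containerd records).
import ipaddress

block = ipaddress.ip_network("192.168.94.192/26")
claimed = [
    ipaddress.ip_address("192.168.94.194"),  # coredns-668d6bf9bc-ds6l9
    ipaddress.ip_address("192.168.94.195"),  # calico-apiserver-569b7b4d59-8g26m
]

print(block.num_addresses)       # 64 addresses: .192 through .255
for ip in claimed:
    print(ip, ip in block)       # both True: each claimed IP sits inside the block
```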
Jan 24 00:29:44.470074 containerd[1583]: 2026-01-24 00:29:44.440 [INFO][4351] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.195/26] IPv6=[] ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" HandleID="k8s-pod-network.b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Workload="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0" Jan 24 00:29:44.471783 containerd[1583]: 2026-01-24 00:29:44.443 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-8g26m" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0", GenerateName:"calico-apiserver-569b7b4d59-", Namespace:"calico-apiserver", SelfLink:"", UID:"2078a205-dd8a-4be4-86c7-d47960d16fb3", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569b7b4d59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"", Pod:"calico-apiserver-569b7b4d59-8g26m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6741517d56d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:44.471783 containerd[1583]: 2026-01-24 00:29:44.444 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.195/32] ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-8g26m" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0" Jan 24 00:29:44.471783 containerd[1583]: 2026-01-24 00:29:44.444 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6741517d56d ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-8g26m" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0" Jan 24 00:29:44.471783 containerd[1583]: 2026-01-24 00:29:44.452 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-8g26m" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0" Jan 24 00:29:44.471783 containerd[1583]: 2026-01-24 00:29:44.453 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-8g26m" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0", GenerateName:"calico-apiserver-569b7b4d59-", Namespace:"calico-apiserver", SelfLink:"", UID:"2078a205-dd8a-4be4-86c7-d47960d16fb3", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569b7b4d59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4", Pod:"calico-apiserver-569b7b4d59-8g26m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6741517d56d", MAC:"92:71:cc:9c:90:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:44.471783 containerd[1583]: 2026-01-24 00:29:44.465 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-8g26m" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--8g26m-eth0" Jan 24 00:29:44.479169 kubelet[2775]: E0124 00:29:44.479152 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:44.499845 kubelet[2775]: I0124 00:29:44.499576 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ds6l9" podStartSLOduration=33.499350658 podStartE2EDuration="33.499350658s" podCreationTimestamp="2026-01-24 00:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:29:44.497100209 +0000 UTC m=+38.260412396" watchObservedRunningTime="2026-01-24 00:29:44.499350658 +0000 UTC m=+38.262662845" Jan 24 00:29:44.508682 containerd[1583]: time="2026-01-24T00:29:44.507965244Z" level=info msg="connecting to shim b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4" address="unix:///run/containerd/s/77269f063a3cf1b886919b6e8b808b7c3b8c3ff45ddc59ace914d03bd99f0337" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:44.536000 audit[4371]: NETFILTER_CFG table=filter:126 family=2 entries=60 op=nft_register_chain pid=4371 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 
00:29:44.536000 audit[4371]: SYSCALL arch=c000003e syscall=46 success=yes exit=32248 a0=3 a1=7ffccfe35240 a2=0 a3=7ffccfe3522c items=0 ppid=4125 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.536000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:44.555000 audit[4404]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:44.555000 audit[4404]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd17455560 a2=0 a3=7ffd1745554c items=0 ppid=2879 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.555000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:44.559000 audit[4404]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:44.559000 audit[4404]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd17455560 a2=0 a3=0 items=0 ppid=2879 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:44.566890 systemd[1]: Started cri-containerd-b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4.scope - libcontainer container b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4. 
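The audit PROCTITLE records above carry the process command line hex-encoded, with NUL bytes separating the arguments. A short illustrative Python helper (an assumption about tooling, not part of auditd) that decodes such a field back into argv:

# Decode an auditd PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# Example: the proctitle logged at 00:29:44.536000 above.
sample = ("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
          "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030")
print(decode_proctitle(sample))
# ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000']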
Jan 24 00:29:44.580538 systemd-networkd[1487]: cali8983060a3e6: Link UP Jan 24 00:29:44.582748 systemd-networkd[1487]: cali8983060a3e6: Gained carrier Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.375 [INFO][4322] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0 calico-kube-controllers-547797c6d5- calico-system 1fd846d3-1f7b-431f-b783-0e9765cbef2a 816 0 2026-01-24 00:29:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:547797c6d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-200-140 calico-kube-controllers-547797c6d5-k28bz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8983060a3e6 [] [] }} ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Namespace="calico-system" Pod="calico-kube-controllers-547797c6d5-k28bz" WorkloadEndpoint="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.375 [INFO][4322] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Namespace="calico-system" Pod="calico-kube-controllers-547797c6d5-k28bz" WorkloadEndpoint="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.410 [INFO][4346] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" HandleID="k8s-pod-network.d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Workload="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.410 [INFO][4346] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" HandleID="k8s-pod-network.d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Workload="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-140", "pod":"calico-kube-controllers-547797c6d5-k28bz", "timestamp":"2026-01-24 00:29:44.410444303 +0000 UTC"}, Hostname:"172-234-200-140", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.410 [INFO][4346] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.440 [INFO][4346] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.440 [INFO][4346] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-140' Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.518 [INFO][4346] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" host="172-234-200-140" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.539 [INFO][4346] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-140" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.549 [INFO][4346] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.557 [INFO][4346] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.560 [INFO][4346] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.561 [INFO][4346] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" host="172-234-200-140" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.563 [INFO][4346] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.568 [INFO][4346] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" host="172-234-200-140" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.573 [INFO][4346] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.196/26] block=192.168.94.192/26 handle="k8s-pod-network.d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" host="172-234-200-140" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.573 [INFO][4346] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.196/26] handle="k8s-pod-network.d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" host="172-234-200-140" Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.573 [INFO][4346] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
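The second CNI request ([4346]) announced the host-wide IPAM lock at 00:29:44.410 but only acquired it at 00:29:44.440, i.e. it waited while the first request ([4351]) held the lock. A tiny sketch, purely for illustration, computing that wait from the two timestamps in the log:

# Rough arithmetic on the log timestamps: how long request [4346] waited
# on the host-wide IPAM lock while [4351] held it.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
announced = datetime.strptime("2026-01-24 00:29:44.410", fmt)
acquired = datetime.strptime("2026-01-24 00:29:44.440", fmt)
print((acquired - announced).total_seconds() * 1000, "ms")  # ~30 ms spent waiting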
Jan 24 00:29:44.601419 containerd[1583]: 2026-01-24 00:29:44.573 [INFO][4346] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.196/26] IPv6=[] ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" HandleID="k8s-pod-network.d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Workload="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0" Jan 24 00:29:44.600000 audit: BPF prog-id=238 op=LOAD Jan 24 00:29:44.602000 audit: BPF prog-id=239 op=LOAD Jan 24 00:29:44.604098 containerd[1583]: 2026-01-24 00:29:44.576 [INFO][4322] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Namespace="calico-system" Pod="calico-kube-controllers-547797c6d5-k28bz" WorkloadEndpoint="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0", GenerateName:"calico-kube-controllers-547797c6d5-", Namespace:"calico-system", SelfLink:"", UID:"1fd846d3-1f7b-431f-b783-0e9765cbef2a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547797c6d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"", Pod:"calico-kube-controllers-547797c6d5-k28bz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8983060a3e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:44.604098 containerd[1583]: 2026-01-24 00:29:44.576 [INFO][4322] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.196/32] ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Namespace="calico-system" Pod="calico-kube-controllers-547797c6d5-k28bz" WorkloadEndpoint="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0" Jan 24 00:29:44.604098 containerd[1583]: 2026-01-24 00:29:44.576 [INFO][4322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8983060a3e6 ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Namespace="calico-system" Pod="calico-kube-controllers-547797c6d5-k28bz" WorkloadEndpoint="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0" Jan 24 00:29:44.604098 containerd[1583]: 2026-01-24 00:29:44.585 [INFO][4322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Namespace="calico-system" Pod="calico-kube-controllers-547797c6d5-k28bz" 
WorkloadEndpoint="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0" Jan 24 00:29:44.604098 containerd[1583]: 2026-01-24 00:29:44.586 [INFO][4322] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Namespace="calico-system" Pod="calico-kube-controllers-547797c6d5-k28bz" WorkloadEndpoint="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0", GenerateName:"calico-kube-controllers-547797c6d5-", Namespace:"calico-system", SelfLink:"", UID:"1fd846d3-1f7b-431f-b783-0e9765cbef2a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547797c6d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e", Pod:"calico-kube-controllers-547797c6d5-k28bz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8983060a3e6", MAC:"42:ab:22:4d:0e:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:44.604098 containerd[1583]: 2026-01-24 00:29:44.598 [INFO][4322] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" Namespace="calico-system" Pod="calico-kube-controllers-547797c6d5-k28bz" WorkloadEndpoint="172--234--200--140-k8s-calico--kube--controllers--547797c6d5--k28bz-eth0" Jan 24 00:29:44.602000 audit[4392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4380 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646539343862336632643365636235396535633831336537643034 Jan 24 00:29:44.602000 audit: BPF prog-id=239 op=UNLOAD Jan 24 00:29:44.602000 audit[4392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4380 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.602000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646539343862336632643365636235396535633831336537643034 Jan 24 00:29:44.603000 audit: BPF prog-id=240 op=LOAD Jan 24 00:29:44.603000 audit[4392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4380 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646539343862336632643365636235396535633831336537643034 Jan 24 00:29:44.603000 audit: BPF prog-id=241 op=LOAD Jan 24 00:29:44.603000 audit[4392]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4380 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646539343862336632643365636235396535633831336537643034 Jan 24 00:29:44.603000 audit: BPF prog-id=241 op=UNLOAD Jan 24 00:29:44.603000 audit[4392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4380 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646539343862336632643365636235396535633831336537643034 Jan 24 00:29:44.603000 audit: BPF prog-id=240 op=UNLOAD Jan 24 00:29:44.603000 audit[4392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4380 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646539343862336632643365636235396535633831336537643034 Jan 24 00:29:44.603000 audit: BPF prog-id=242 op=LOAD Jan 24 00:29:44.603000 audit[4392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4380 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.603000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646539343862336632643365636235396535633831336537643034 Jan 24 00:29:44.610000 audit[4414]: NETFILTER_CFG table=filter:129 family=2 entries=17 op=nft_register_rule pid=4414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:44.610000 audit[4414]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe3f8db620 a2=0 a3=7ffe3f8db60c items=0 ppid=2879 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.610000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:44.618000 audit[4414]: NETFILTER_CFG table=nat:130 family=2 entries=35 op=nft_register_chain pid=4414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:44.618000 audit[4414]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe3f8db620 a2=0 a3=7ffe3f8db60c items=0 ppid=2879 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.618000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:44.639808 containerd[1583]: time="2026-01-24T00:29:44.639777188Z" level=info msg="connecting to shim d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" address="unix:///run/containerd/s/4f74bd3a417653e9145947d91b646d4329c1dc2eeb3619b013ce2de22d3de38b" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:44.669000 audit[4449]: NETFILTER_CFG table=filter:131 family=2 entries=40 op=nft_register_chain pid=4449 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:29:44.669000 audit[4449]: SYSCALL arch=c000003e syscall=46 success=yes exit=20748 a0=3 a1=7ffc560ec470 a2=0 a3=7ffc560ec45c items=0 ppid=4125 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.669000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:44.680736 containerd[1583]: time="2026-01-24T00:29:44.680596977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569b7b4d59-8g26m,Uid:2078a205-dd8a-4be4-86c7-d47960d16fb3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b7de948b3f2d3ecb59e5c813e7d04d3aeb8324b021090bc906f86c057284f5f4\"" Jan 24 00:29:44.685193 containerd[1583]: time="2026-01-24T00:29:44.685168225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:29:44.686005 systemd[1]: Started cri-containerd-d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e.scope - libcontainer container d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e. 
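The containerd entries above ("connecting to shim ...") are logfmt-style key/value records. A small illustrative parser (the regex and sample line are assumptions based on the log text, not containerd's own parsing code) that pulls out the shim socket address:

# Illustrative logfmt-style parse of a containerd "connecting to shim" line.
import re

line = ('time="2026-01-24T00:29:44.639777188Z" level=info '
        'msg="connecting to shim d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e" '
        'address="unix:///run/containerd/s/4f74bd3a417653e9145947d91b646d4329c1dc2eeb3619b013ce2de22d3de38b" '
        'namespace=k8s.io protocol=ttrpc version=3')

pairs = re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', line)
fields = {key: quoted or bare for key, quoted, bare in pairs}
print(fields["msg"])      # connecting to shim d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e
print(fields["address"])  # unix:///run/containerd/s/4f74bd3a417653e9145947d91b646d4329c1dc2eeb3619b013ce2de22d3de38b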
Jan 24 00:29:44.701000 audit: BPF prog-id=243 op=LOAD Jan 24 00:29:44.701000 audit: BPF prog-id=244 op=LOAD Jan 24 00:29:44.701000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4430 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435333365376136653138393635623564636563346337653733376530 Jan 24 00:29:44.701000 audit: BPF prog-id=244 op=UNLOAD Jan 24 00:29:44.701000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4430 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435333365376136653138393635623564636563346337653733376530 Jan 24 00:29:44.701000 audit: BPF prog-id=245 op=LOAD Jan 24 00:29:44.701000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4430 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435333365376136653138393635623564636563346337653733376530 Jan 24 00:29:44.702000 audit: BPF prog-id=246 op=LOAD Jan 24 00:29:44.702000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4430 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435333365376136653138393635623564636563346337653733376530 Jan 24 00:29:44.702000 audit: BPF prog-id=246 op=UNLOAD Jan 24 00:29:44.702000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4430 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435333365376136653138393635623564636563346337653733376530 Jan 24 00:29:44.702000 audit: BPF prog-id=245 op=UNLOAD Jan 24 00:29:44.702000 audit[4442]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4430 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435333365376136653138393635623564636563346337653733376530 Jan 24 00:29:44.702000 audit: BPF prog-id=247 op=LOAD Jan 24 00:29:44.702000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4430 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:44.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435333365376136653138393635623564636563346337653733376530 Jan 24 00:29:44.742439 containerd[1583]: time="2026-01-24T00:29:44.742374377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547797c6d5-k28bz,Uid:1fd846d3-1f7b-431f-b783-0e9765cbef2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d533e7a6e18965b5dcec4c7e737e0736e88fc83af434393e31f6ad6455822a2e\"" Jan 24 00:29:44.811602 containerd[1583]: time="2026-01-24T00:29:44.811438132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:44.812308 containerd[1583]: time="2026-01-24T00:29:44.812250752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:29:44.812506 containerd[1583]: time="2026-01-24T00:29:44.812326992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:44.812700 kubelet[2775]: E0124 00:29:44.812640 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:29:44.813102 kubelet[2775]: E0124 00:29:44.812886 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:29:44.813518 kubelet[2775]: E0124 00:29:44.813237 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95p6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569b7b4d59-8g26m_calico-apiserver(2078a205-dd8a-4be4-86c7-d47960d16fb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:44.814033 containerd[1583]: time="2026-01-24T00:29:44.814001151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:29:44.817896 kubelet[2775]: E0124 00:29:44.817845 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:29:44.946824 containerd[1583]: time="2026-01-24T00:29:44.946438514Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:44.948234 containerd[1583]: time="2026-01-24T00:29:44.948197814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:29:44.948288 containerd[1583]: time="2026-01-24T00:29:44.948277994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" 
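Both PullImage attempts above fail with a 404 from ghcr.io, which is what drives the ErrImagePull and later ImagePullBackOff kubelet entries. A hedged sketch (the regex and sample text are assumptions, not a containerd or kubelet API) that extracts the unresolvable image references from log text like this:

# Illustrative only: collect image references that containerd reports as unresolvable.
import re

log_text = """
PullImage "ghcr.io/flatcar/calico/apiserver:v3.30.4" failed ... failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found
PullImage "ghcr.io/flatcar/calico/kube-controllers:v3.30.4" failed ... failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found
"""

failed = sorted(set(re.findall(r"failed to resolve image: (\S+): not found", log_text)))
for image in failed:
    print(image)
# ghcr.io/flatcar/calico/apiserver:v3.30.4
# ghcr.io/flatcar/calico/kube-controllers:v3.30.4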
Jan 24 00:29:44.948518 kubelet[2775]: E0124 00:29:44.948462 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:29:44.948518 kubelet[2775]: E0124 00:29:44.948510 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:29:44.948726 kubelet[2775]: E0124 00:29:44.948616 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jl6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547797c6d5-k28bz_calico-system(1fd846d3-1f7b-431f-b783-0e9765cbef2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:44.949798 kubelet[2775]: E0124 00:29:44.949767 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:29:45.045053 systemd-networkd[1487]: vxlan.calico: Gained IPv6LL Jan 24 00:29:45.300799 systemd-networkd[1487]: cali6b00ec0aa87: Gained IPv6LL Jan 24 00:29:45.323836 kubelet[2775]: E0124 00:29:45.323793 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:45.324276 containerd[1583]: time="2026-01-24T00:29:45.324199596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw5xc,Uid:d85e97f3-257f-4333-95c1-a12671983d81,Namespace:kube-system,Attempt:0,}" Jan 24 00:29:45.433372 systemd-networkd[1487]: cali4fa1266a443: Link UP Jan 24 00:29:45.434932 systemd-networkd[1487]: cali4fa1266a443: Gained carrier Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.365 [INFO][4474] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0 coredns-668d6bf9bc- kube-system d85e97f3-257f-4333-95c1-a12671983d81 814 0 2026-01-24 00:29:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-200-140 coredns-668d6bf9bc-jw5xc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4fa1266a443 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw5xc" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.366 [INFO][4474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw5xc" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.393 [INFO][4486] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" HandleID="k8s-pod-network.06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Workload="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.393 [INFO][4486] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" HandleID="k8s-pod-network.06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Workload="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d50a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-200-140", "pod":"coredns-668d6bf9bc-jw5xc", "timestamp":"2026-01-24 00:29:45.392997771 +0000 UTC"}, Hostname:"172-234-200-140", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.393 [INFO][4486] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.393 [INFO][4486] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.393 [INFO][4486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-140' Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.399 [INFO][4486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" host="172-234-200-140" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.403 [INFO][4486] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-140" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.407 [INFO][4486] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.409 [INFO][4486] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.413 [INFO][4486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.413 [INFO][4486] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" host="172-234-200-140" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.415 [INFO][4486] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.418 [INFO][4486] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" host="172-234-200-140" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.424 [INFO][4486] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.197/26] block=192.168.94.192/26 handle="k8s-pod-network.06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" host="172-234-200-140" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.424 [INFO][4486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.197/26] handle="k8s-pod-network.06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" host="172-234-200-140" Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.424 [INFO][4486] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
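The recurring "Nameserver limits exceeded" warnings above indicate the node's resolv.conf listed more nameservers than the resolver limit of three, so the kubelet keeps the first three (172.232.0.17, 172.232.0.16, 172.232.0.21) and drops the rest. A minimal sketch of that check, assuming a resolv.conf-style file and the limit of three; it is not the kubelet's own code:

# Illustrative resolv.conf check for the "Nameserver limits exceeded" warnings.
MAX_NAMESERVERS = 3  # assumed limit, matching the three servers kept in the log

def check_resolv_conf(path="/etc/resolv.conf"):
    with open(path) as f:
        servers = [line.split()[1] for line in f
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]
    kept, dropped = servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
    if dropped:
        print(f"nameserver limit exceeded: keeping {kept}, omitting {dropped}")
    return kept

# check_resolv_conf()  # on this node it would keep 172.232.0.17, 172.232.0.16, 172.232.0.21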
Jan 24 00:29:45.451951 containerd[1583]: 2026-01-24 00:29:45.424 [INFO][4486] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.197/26] IPv6=[] ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" HandleID="k8s-pod-network.06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Workload="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0" Jan 24 00:29:45.452591 containerd[1583]: 2026-01-24 00:29:45.428 [INFO][4474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw5xc" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d85e97f3-257f-4333-95c1-a12671983d81", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"", Pod:"coredns-668d6bf9bc-jw5xc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fa1266a443", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:45.452591 containerd[1583]: 2026-01-24 00:29:45.428 [INFO][4474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.197/32] ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw5xc" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0" Jan 24 00:29:45.452591 containerd[1583]: 2026-01-24 00:29:45.428 [INFO][4474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fa1266a443 ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw5xc" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0" Jan 24 00:29:45.452591 containerd[1583]: 2026-01-24 00:29:45.435 [INFO][4474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw5xc" 
WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0" Jan 24 00:29:45.452591 containerd[1583]: 2026-01-24 00:29:45.436 [INFO][4474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw5xc" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d85e97f3-257f-4333-95c1-a12671983d81", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af", Pod:"coredns-668d6bf9bc-jw5xc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fa1266a443", MAC:"06:d9:15:f3:a3:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:45.452591 containerd[1583]: 2026-01-24 00:29:45.446 [INFO][4474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" Namespace="kube-system" Pod="coredns-668d6bf9bc-jw5xc" WorkloadEndpoint="172--234--200--140-k8s-coredns--668d6bf9bc--jw5xc-eth0" Jan 24 00:29:45.474139 containerd[1583]: time="2026-01-24T00:29:45.474077451Z" level=info msg="connecting to shim 06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af" address="unix:///run/containerd/s/5943ec6642fe999663c5f108db42736c91aa11e9ba1c1553c7fc3d3c4b52d4e3" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:45.484034 kubelet[2775]: E0124 00:29:45.484001 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" 
podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:29:45.485984 kubelet[2775]: E0124 00:29:45.485962 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:45.492671 kubelet[2775]: E0124 00:29:45.487589 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:29:45.513000 audit[4523]: NETFILTER_CFG table=filter:132 family=2 entries=40 op=nft_register_chain pid=4523 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:29:45.513000 audit[4523]: SYSCALL arch=c000003e syscall=46 success=yes exit=20328 a0=3 a1=7fffb81e8280 a2=0 a3=7fffb81e826c items=0 ppid=4125 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.513000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:45.522920 systemd[1]: Started cri-containerd-06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af.scope - libcontainer container 06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af. 
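The WorkloadEndpointPort dump for the coredns endpoint above prints the port numbers in hex (Port:0x35, Port:0x23c1). A quick illustrative check that these are the expected CoreDNS ports, using only values taken from the log:

# 0x35 and 0x23c1 from the endpoint dump, printed in decimal.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
for name, value in ports.items():
    print(name, value)
# dns 53
# dns-tcp 53
# metrics 9153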
Jan 24 00:29:45.543000 audit[4543]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=4543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:45.543000 audit[4543]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe086d1c90 a2=0 a3=7ffe086d1c7c items=0 ppid=2879 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.543000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:45.544000 audit: BPF prog-id=248 op=LOAD Jan 24 00:29:45.545000 audit: BPF prog-id=249 op=LOAD Jan 24 00:29:45.545000 audit[4522]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4509 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036323534343338613161353134623566353761643832336333336236 Jan 24 00:29:45.545000 audit: BPF prog-id=249 op=UNLOAD Jan 24 00:29:45.545000 audit[4522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4509 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036323534343338613161353134623566353761643832336333336236 Jan 24 00:29:45.545000 audit: BPF prog-id=250 op=LOAD Jan 24 00:29:45.545000 audit[4522]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4509 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036323534343338613161353134623566353761643832336333336236 Jan 24 00:29:45.545000 audit: BPF prog-id=251 op=LOAD Jan 24 00:29:45.545000 audit[4522]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4509 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036323534343338613161353134623566353761643832336333336236 Jan 24 00:29:45.545000 audit: BPF prog-id=251 op=UNLOAD Jan 24 00:29:45.545000 
audit[4522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4509 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036323534343338613161353134623566353761643832336333336236 Jan 24 00:29:45.545000 audit: BPF prog-id=250 op=UNLOAD Jan 24 00:29:45.545000 audit[4522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4509 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036323534343338613161353134623566353761643832336333336236 Jan 24 00:29:45.545000 audit: BPF prog-id=252 op=LOAD Jan 24 00:29:45.545000 audit[4522]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4509 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036323534343338613161353134623566353761643832336333336236 Jan 24 00:29:45.548000 audit[4543]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=4543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:45.548000 audit[4543]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe086d1c90 a2=0 a3=7ffe086d1c7c items=0 ppid=2879 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.548000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:45.593998 containerd[1583]: time="2026-01-24T00:29:45.593811481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw5xc,Uid:d85e97f3-257f-4333-95c1-a12671983d81,Namespace:kube-system,Attempt:0,} returns sandbox id \"06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af\"" Jan 24 00:29:45.596110 kubelet[2775]: E0124 00:29:45.596077 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:45.599689 containerd[1583]: time="2026-01-24T00:29:45.599374658Z" level=info msg="CreateContainer within sandbox \"06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:29:45.609793 containerd[1583]: time="2026-01-24T00:29:45.609744083Z" 
level=info msg="Container 8a339c59d5ad59a57fdb46ae373a80ca6ed0b13b2b8c6fa6c40ebbb0125138e5: CDI devices from CRI Config.CDIDevices: []" Jan 24 00:29:45.617962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765521645.mount: Deactivated successfully. Jan 24 00:29:45.620247 containerd[1583]: time="2026-01-24T00:29:45.620157918Z" level=info msg="CreateContainer within sandbox \"06254438a1a514b5f57ad823c33b6ce6e4f5c68f9caa749252188984cc6d72af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a339c59d5ad59a57fdb46ae373a80ca6ed0b13b2b8c6fa6c40ebbb0125138e5\"" Jan 24 00:29:45.621004 containerd[1583]: time="2026-01-24T00:29:45.620855487Z" level=info msg="StartContainer for \"8a339c59d5ad59a57fdb46ae373a80ca6ed0b13b2b8c6fa6c40ebbb0125138e5\"" Jan 24 00:29:45.622440 containerd[1583]: time="2026-01-24T00:29:45.622412476Z" level=info msg="connecting to shim 8a339c59d5ad59a57fdb46ae373a80ca6ed0b13b2b8c6fa6c40ebbb0125138e5" address="unix:///run/containerd/s/5943ec6642fe999663c5f108db42736c91aa11e9ba1c1553c7fc3d3c4b52d4e3" protocol=ttrpc version=3 Jan 24 00:29:45.646847 systemd[1]: Started cri-containerd-8a339c59d5ad59a57fdb46ae373a80ca6ed0b13b2b8c6fa6c40ebbb0125138e5.scope - libcontainer container 8a339c59d5ad59a57fdb46ae373a80ca6ed0b13b2b8c6fa6c40ebbb0125138e5. Jan 24 00:29:45.661000 audit: BPF prog-id=253 op=LOAD Jan 24 00:29:45.663000 audit: BPF prog-id=254 op=LOAD Jan 24 00:29:45.663000 audit[4550]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4509 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861333339633539643561643539613537666462343661653337336138 Jan 24 00:29:45.663000 audit: BPF prog-id=254 op=UNLOAD Jan 24 00:29:45.663000 audit[4550]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4509 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861333339633539643561643539613537666462343661653337336138 Jan 24 00:29:45.664000 audit: BPF prog-id=255 op=LOAD Jan 24 00:29:45.664000 audit[4550]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4509 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861333339633539643561643539613537666462343661653337336138 Jan 24 00:29:45.664000 audit: BPF prog-id=256 op=LOAD Jan 24 00:29:45.664000 audit[4550]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c00017a218 a2=98 a3=0 items=0 ppid=4509 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861333339633539643561643539613537666462343661653337336138 Jan 24 00:29:45.664000 audit: BPF prog-id=256 op=UNLOAD Jan 24 00:29:45.664000 audit[4550]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4509 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861333339633539643561643539613537666462343661653337336138 Jan 24 00:29:45.664000 audit: BPF prog-id=255 op=UNLOAD Jan 24 00:29:45.664000 audit[4550]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4509 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861333339633539643561643539613537666462343661653337336138 Jan 24 00:29:45.665000 audit: BPF prog-id=257 op=LOAD Jan 24 00:29:45.665000 audit[4550]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4509 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:45.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861333339633539643561643539613537666462343661653337336138 Jan 24 00:29:45.691521 containerd[1583]: time="2026-01-24T00:29:45.691461482Z" level=info msg="StartContainer for \"8a339c59d5ad59a57fdb46ae373a80ca6ed0b13b2b8c6fa6c40ebbb0125138e5\" returns successfully" Jan 24 00:29:46.326763 containerd[1583]: time="2026-01-24T00:29:46.326612824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b2qbk,Uid:fe31be8c-c3f6-4683-9bda-2dbdf7783a1a,Namespace:calico-system,Attempt:0,}" Jan 24 00:29:46.390109 systemd-networkd[1487]: cali6741517d56d: Gained IPv6LL Jan 24 00:29:46.475337 systemd-networkd[1487]: calicee89f536bd: Link UP Jan 24 00:29:46.476732 systemd-networkd[1487]: calicee89f536bd: Gained carrier Jan 24 00:29:46.491541 kubelet[2775]: E0124 00:29:46.491481 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:46.495111 kubelet[2775]: E0124 00:29:46.494153 
2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:46.495111 kubelet[2775]: E0124 00:29:46.494966 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:29:46.496242 kubelet[2775]: E0124 00:29:46.496126 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.374 [INFO][4582] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0 goldmane-666569f655- calico-system fe31be8c-c3f6-4683-9bda-2dbdf7783a1a 815 0 2026-01-24 00:29:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-234-200-140 goldmane-666569f655-b2qbk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicee89f536bd [] [] }} ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Namespace="calico-system" Pod="goldmane-666569f655-b2qbk" WorkloadEndpoint="172--234--200--140-k8s-goldmane--666569f655--b2qbk-" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.374 [INFO][4582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Namespace="calico-system" Pod="goldmane-666569f655-b2qbk" WorkloadEndpoint="172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.429 [INFO][4595] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" HandleID="k8s-pod-network.c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Workload="172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.429 [INFO][4595] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" HandleID="k8s-pod-network.c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Workload="172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", 
"node":"172-234-200-140", "pod":"goldmane-666569f655-b2qbk", "timestamp":"2026-01-24 00:29:46.429259213 +0000 UTC"}, Hostname:"172-234-200-140", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.429 [INFO][4595] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.429 [INFO][4595] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.429 [INFO][4595] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-140' Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.438 [INFO][4595] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" host="172-234-200-140" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.443 [INFO][4595] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-140" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.450 [INFO][4595] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.452 [INFO][4595] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.455 [INFO][4595] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.455 [INFO][4595] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" host="172-234-200-140" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.456 [INFO][4595] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45 Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.460 [INFO][4595] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" host="172-234-200-140" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.467 [INFO][4595] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.198/26] block=192.168.94.192/26 handle="k8s-pod-network.c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" host="172-234-200-140" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.467 [INFO][4595] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.198/26] handle="k8s-pod-network.c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" host="172-234-200-140" Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.467 [INFO][4595] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:29:46.508460 containerd[1583]: 2026-01-24 00:29:46.467 [INFO][4595] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.198/26] IPv6=[] ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" HandleID="k8s-pod-network.c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Workload="172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0" Jan 24 00:29:46.509349 containerd[1583]: 2026-01-24 00:29:46.470 [INFO][4582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Namespace="calico-system" Pod="goldmane-666569f655-b2qbk" WorkloadEndpoint="172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fe31be8c-c3f6-4683-9bda-2dbdf7783a1a", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"", Pod:"goldmane-666569f655-b2qbk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.94.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicee89f536bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:46.509349 containerd[1583]: 2026-01-24 00:29:46.470 [INFO][4582] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.198/32] ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Namespace="calico-system" Pod="goldmane-666569f655-b2qbk" WorkloadEndpoint="172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0" Jan 24 00:29:46.509349 containerd[1583]: 2026-01-24 00:29:46.470 [INFO][4582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicee89f536bd ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Namespace="calico-system" Pod="goldmane-666569f655-b2qbk" WorkloadEndpoint="172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0" Jan 24 00:29:46.509349 containerd[1583]: 2026-01-24 00:29:46.477 [INFO][4582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Namespace="calico-system" Pod="goldmane-666569f655-b2qbk" WorkloadEndpoint="172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0" Jan 24 00:29:46.509349 containerd[1583]: 2026-01-24 00:29:46.478 [INFO][4582] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Namespace="calico-system" Pod="goldmane-666569f655-b2qbk" 
WorkloadEndpoint="172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fe31be8c-c3f6-4683-9bda-2dbdf7783a1a", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45", Pod:"goldmane-666569f655-b2qbk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.94.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicee89f536bd", MAC:"42:81:5b:b3:00:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:46.509349 containerd[1583]: 2026-01-24 00:29:46.495 [INFO][4582] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" Namespace="calico-system" Pod="goldmane-666569f655-b2qbk" WorkloadEndpoint="172--234--200--140-k8s-goldmane--666569f655--b2qbk-eth0" Jan 24 00:29:46.535366 kubelet[2775]: I0124 00:29:46.535170 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jw5xc" podStartSLOduration=35.53515618 podStartE2EDuration="35.53515618s" podCreationTimestamp="2026-01-24 00:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:29:46.53502674 +0000 UTC m=+40.298338927" watchObservedRunningTime="2026-01-24 00:29:46.53515618 +0000 UTC m=+40.298468367" Jan 24 00:29:46.556439 containerd[1583]: time="2026-01-24T00:29:46.556395149Z" level=info msg="connecting to shim c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45" address="unix:///run/containerd/s/074c7469061cb6f5c28a1766d49bb6269d93c78d0ccddc2a002659a3bd85b870" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:46.580802 systemd-networkd[1487]: cali8983060a3e6: Gained IPv6LL Jan 24 00:29:46.609101 systemd[1]: Started cri-containerd-c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45.scope - libcontainer container c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45. 
Jan 24 00:29:46.616000 audit[4641]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:46.616000 audit[4641]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd2204b4d0 a2=0 a3=7ffd2204b4bc items=0 ppid=2879 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:46.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:46.624000 audit[4641]: NETFILTER_CFG table=nat:136 family=2 entries=44 op=nft_register_rule pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:46.624000 audit[4641]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd2204b4d0 a2=0 a3=7ffd2204b4bc items=0 ppid=2879 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:46.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:46.664000 audit: BPF prog-id=258 op=LOAD Jan 24 00:29:46.665000 audit: BPF prog-id=259 op=LOAD Jan 24 00:29:46.665000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4618 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:46.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331643637316132376562613866623831666531666339646566343335 Jan 24 00:29:46.665000 audit: BPF prog-id=259 op=UNLOAD Jan 24 00:29:46.665000 audit[4629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:46.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331643637316132376562613866623831666531666339646566343335 Jan 24 00:29:46.665000 audit: BPF prog-id=260 op=LOAD Jan 24 00:29:46.665000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4618 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:46.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331643637316132376562613866623831666531666339646566343335 Jan 24 00:29:46.665000 audit: BPF prog-id=261 op=LOAD Jan 24 00:29:46.665000 
audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4618 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:46.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331643637316132376562613866623831666531666339646566343335 Jan 24 00:29:46.665000 audit: BPF prog-id=261 op=UNLOAD Jan 24 00:29:46.665000 audit[4629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:46.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331643637316132376562613866623831666531666339646566343335 Jan 24 00:29:46.665000 audit: BPF prog-id=260 op=UNLOAD Jan 24 00:29:46.665000 audit[4629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:46.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331643637316132376562613866623831666531666339646566343335 Jan 24 00:29:46.665000 audit: BPF prog-id=262 op=LOAD Jan 24 00:29:46.665000 audit[4629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4618 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:46.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331643637316132376562613866623831666531666339646566343335 Jan 24 00:29:46.678000 audit[4642]: NETFILTER_CFG table=filter:137 family=2 entries=56 op=nft_register_chain pid=4642 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:29:46.678000 audit[4642]: SYSCALL arch=c000003e syscall=46 success=yes exit=28728 a0=3 a1=7ffc85516dd0 a2=0 a3=7ffc85516dbc items=0 ppid=4125 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:46.678000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:46.772920 systemd-networkd[1487]: cali4fa1266a443: Gained IPv6LL Jan 24 00:29:46.777478 containerd[1583]: time="2026-01-24T00:29:46.777450039Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b2qbk,Uid:fe31be8c-c3f6-4683-9bda-2dbdf7783a1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1d671a27eba8fb81fe1fc9def43598f5a1763da3aa65986fc25ee36393ebd45\"" Jan 24 00:29:46.779979 containerd[1583]: time="2026-01-24T00:29:46.779920107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:29:46.908728 containerd[1583]: time="2026-01-24T00:29:46.908593263Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:46.909337 containerd[1583]: time="2026-01-24T00:29:46.909309123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:29:46.909403 containerd[1583]: time="2026-01-24T00:29:46.909379033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:46.909805 kubelet[2775]: E0124 00:29:46.909771 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:29:46.909889 kubelet[2775]: E0124 00:29:46.909816 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:29:46.909963 kubelet[2775]: E0124 00:29:46.909915 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5rh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-b2qbk_calico-system(fe31be8c-c3f6-4683-9bda-2dbdf7783a1a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:46.911314 kubelet[2775]: E0124 00:29:46.911268 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:29:47.495440 kubelet[2775]: E0124 00:29:47.495404 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:47.495946 kubelet[2775]: E0124 00:29:47.495838 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:29:47.678000 audit[4657]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:47.681105 kernel: kauditd_printk_skb: 387 callbacks suppressed Jan 24 00:29:47.681157 kernel: audit: type=1325 audit(1769214587.678:748): table=filter:138 family=2 entries=14 op=nft_register_rule pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:47.678000 audit[4657]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd8235f910 a2=0 a3=7ffd8235f8fc items=0 ppid=2879 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:47.689005 kernel: audit: 
type=1300 audit(1769214587.678:748): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd8235f910 a2=0 a3=7ffd8235f8fc items=0 ppid=2879 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:47.700678 kernel: audit: type=1327 audit(1769214587.678:748): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:47.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:47.698000 audit[4657]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:47.706691 kernel: audit: type=1325 audit(1769214587.698:749): table=nat:139 family=2 entries=56 op=nft_register_chain pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:47.698000 audit[4657]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd8235f910 a2=0 a3=7ffd8235f8fc items=0 ppid=2879 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:47.718254 kernel: audit: type=1300 audit(1769214587.698:749): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd8235f910 a2=0 a3=7ffd8235f8fc items=0 ppid=2879 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:47.718312 kernel: audit: type=1327 audit(1769214587.698:749): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:47.698000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:48.063648 systemd-networkd[1487]: calicee89f536bd: Gained IPv6LL Jan 24 00:29:48.325625 containerd[1583]: time="2026-01-24T00:29:48.325151555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569b7b4d59-5k5g4,Uid:cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:29:48.326530 containerd[1583]: time="2026-01-24T00:29:48.326484524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dzs24,Uid:9b58f3e4-7a56-472e-b647-710386b89b86,Namespace:calico-system,Attempt:0,}" Jan 24 00:29:48.465109 systemd-networkd[1487]: calide11c4a8f31: Link UP Jan 24 00:29:48.467054 systemd-networkd[1487]: calide11c4a8f31: Gained carrier Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.383 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0 calico-apiserver-569b7b4d59- calico-apiserver cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b 811 0 2026-01-24 00:29:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:569b7b4d59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-200-140 
calico-apiserver-569b7b4d59-5k5g4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calide11c4a8f31 [] [] }} ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-5k5g4" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.384 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-5k5g4" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.415 [INFO][4685] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" HandleID="k8s-pod-network.34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Workload="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.416 [INFO][4685] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" HandleID="k8s-pod-network.34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Workload="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-200-140", "pod":"calico-apiserver-569b7b4d59-5k5g4", "timestamp":"2026-01-24 00:29:48.415841729 +0000 UTC"}, Hostname:"172-234-200-140", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.416 [INFO][4685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.416 [INFO][4685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
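The recurring kubelet dns.go:153 warnings ("Nameserver limits exceeded") indicate that the node's resolv.conf lists more nameservers than the resolver limit of three that kubelet enforces, so only the first three (172.232.0.17, 172.232.0.16, 172.232.0.21) end up on the applied nameserver line. A rough illustration of that truncation, not kubelet's actual code; the limit of 3 matches glibc's MAXNS, and the fourth entry below is made up for the example:

    # Illustrative only: trim a resolv.conf nameserver list to the resolver limit of 3.
    MAX_NAMESERVERS = 3

    def applied_nameservers(configured: list[str]) -> list[str]:
        return configured[:MAX_NAMESERVERS]

    configured = ["172.232.0.17", "172.232.0.16", "172.232.0.21", "203.0.113.53"]  # 4th entry hypothetical
    print(" ".join(applied_nameservers(configured)))
    # -> "172.232.0.17 172.232.0.16 172.232.0.21", the "applied nameserver line" in the warnings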
Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.416 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-140' Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.425 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" host="172-234-200-140" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.430 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-140" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.435 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.439 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.442 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.442 [INFO][4685] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" host="172-234-200-140" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.444 [INFO][4685] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64 Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.449 [INFO][4685] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" host="172-234-200-140" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.454 [INFO][4685] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.199/26] block=192.168.94.192/26 handle="k8s-pod-network.34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" host="172-234-200-140" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.454 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.199/26] handle="k8s-pod-network.34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" host="172-234-200-140" Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.454 [INFO][4685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
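The MAC addresses recorded on these workload endpoints (06:d9:15:f3:a3:39 and 42:81:5b:b3:00:ce earlier, 0a:8d:a7:a6:58:e0 in the endpoint written just below) all have the locally-administered bit set and the multicast bit clear in the first octet, which is consistent with randomly generated per-veth addresses rather than vendor-assigned ones. A small check of those two bits, plain bit arithmetic on the first octet:

    def classify_mac(mac: str) -> str:
        first_octet = int(mac.split(":")[0], 16)
        local = bool(first_octet & 0b10)      # U/L bit: 1 = locally administered
        multicast = bool(first_octet & 0b01)  # I/G bit: 1 = multicast/group address
        return f"{mac}: locally_administered={local} multicast={multicast}"

    for mac in ("06:d9:15:f3:a3:39", "42:81:5b:b3:00:ce", "0a:8d:a7:a6:58:e0"):
        print(classify_mac(mac))
    # all three report locally_administered=True multicast=False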
Jan 24 00:29:48.478462 containerd[1583]: 2026-01-24 00:29:48.455 [INFO][4685] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.199/26] IPv6=[] ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" HandleID="k8s-pod-network.34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Workload="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0" Jan 24 00:29:48.479309 containerd[1583]: 2026-01-24 00:29:48.460 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-5k5g4" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0", GenerateName:"calico-apiserver-569b7b4d59-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569b7b4d59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"", Pod:"calico-apiserver-569b7b4d59-5k5g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide11c4a8f31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:48.479309 containerd[1583]: 2026-01-24 00:29:48.461 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.199/32] ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-5k5g4" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0" Jan 24 00:29:48.479309 containerd[1583]: 2026-01-24 00:29:48.461 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide11c4a8f31 ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-5k5g4" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0" Jan 24 00:29:48.479309 containerd[1583]: 2026-01-24 00:29:48.465 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-5k5g4" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0" Jan 24 00:29:48.479309 containerd[1583]: 2026-01-24 00:29:48.466 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-5k5g4" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0", GenerateName:"calico-apiserver-569b7b4d59-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569b7b4d59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64", Pod:"calico-apiserver-569b7b4d59-5k5g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide11c4a8f31", MAC:"0a:8d:a7:a6:58:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:48.479309 containerd[1583]: 2026-01-24 00:29:48.474 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" Namespace="calico-apiserver" Pod="calico-apiserver-569b7b4d59-5k5g4" WorkloadEndpoint="172--234--200--140-k8s-calico--apiserver--569b7b4d59--5k5g4-eth0" Jan 24 00:29:48.498963 kubelet[2775]: E0124 00:29:48.498928 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:48.502683 kubelet[2775]: E0124 00:29:48.502566 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:29:48.517509 containerd[1583]: time="2026-01-24T00:29:48.517021649Z" level=info msg="connecting to shim 34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64" address="unix:///run/containerd/s/f742359dbe2c06434ee2fddad3836a2b7885c21ed623e76faae6aa171aa1ce5c" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:48.527000 audit[4719]: NETFILTER_CFG table=filter:140 family=2 entries=53 op=nft_register_chain pid=4719 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 
00:29:48.535976 kernel: audit: type=1325 audit(1769214588.527:750): table=filter:140 family=2 entries=53 op=nft_register_chain pid=4719 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:29:48.527000 audit[4719]: SYSCALL arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7ffc5789fbc0 a2=0 a3=7ffc5789fbac items=0 ppid=4125 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.544717 kernel: audit: type=1300 audit(1769214588.527:750): arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7ffc5789fbc0 a2=0 a3=7ffc5789fbac items=0 ppid=4125 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.527000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:48.556669 kernel: audit: type=1327 audit(1769214588.527:750): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:48.582941 systemd[1]: Started cri-containerd-34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64.scope - libcontainer container 34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64. Jan 24 00:29:48.600887 systemd-networkd[1487]: cali218fd5c2987: Link UP Jan 24 00:29:48.601946 systemd-networkd[1487]: cali218fd5c2987: Gained carrier Jan 24 00:29:48.617000 audit: BPF prog-id=263 op=LOAD Jan 24 00:29:48.621694 kernel: audit: type=1334 audit(1769214588.617:751): prog-id=263 op=LOAD Jan 24 00:29:48.623000 audit: BPF prog-id=264 op=LOAD Jan 24 00:29:48.623000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4714 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334626265373366643961663539663333646262646331366639366565 Jan 24 00:29:48.623000 audit: BPF prog-id=264 op=UNLOAD Jan 24 00:29:48.623000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4714 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334626265373366643961663539663333646262646331366639366565 Jan 24 00:29:48.624000 audit: BPF prog-id=265 op=LOAD Jan 24 00:29:48.624000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4714 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334626265373366643961663539663333646262646331366639366565 Jan 24 00:29:48.627000 audit: BPF prog-id=266 op=LOAD Jan 24 00:29:48.627000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4714 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334626265373366643961663539663333646262646331366639366565 Jan 24 00:29:48.627000 audit: BPF prog-id=266 op=UNLOAD Jan 24 00:29:48.627000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4714 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334626265373366643961663539663333646262646331366639366565 Jan 24 00:29:48.627000 audit: BPF prog-id=265 op=UNLOAD Jan 24 00:29:48.627000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4714 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334626265373366643961663539663333646262646331366639366565 Jan 24 00:29:48.627000 audit: BPF prog-id=267 op=LOAD Jan 24 00:29:48.627000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4714 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334626265373366643961663539663333646262646331366639366565 Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.385 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--140-k8s-csi--node--driver--dzs24-eth0 csi-node-driver- calico-system 9b58f3e4-7a56-472e-b647-710386b89b86 712 0 2026-01-24 00:29:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-200-140 csi-node-driver-dzs24 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali218fd5c2987 [] [] }} ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" Namespace="calico-system" Pod="csi-node-driver-dzs24" WorkloadEndpoint="172--234--200--140-k8s-csi--node--driver--dzs24-" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.385 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" Namespace="calico-system" Pod="csi-node-driver-dzs24" WorkloadEndpoint="172--234--200--140-k8s-csi--node--driver--dzs24-eth0" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.447 [INFO][4690] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" HandleID="k8s-pod-network.8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" Workload="172--234--200--140-k8s-csi--node--driver--dzs24-eth0" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.447 [INFO][4690] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" HandleID="k8s-pod-network.8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" Workload="172--234--200--140-k8s-csi--node--driver--dzs24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-140", "pod":"csi-node-driver-dzs24", "timestamp":"2026-01-24 00:29:48.447477694 +0000 UTC"}, Hostname:"172-234-200-140", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.447 [INFO][4690] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.454 [INFO][4690] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.454 [INFO][4690] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-140' Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.527 [INFO][4690] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" host="172-234-200-140" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.535 [INFO][4690] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-140" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.546 [INFO][4690] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.558 [INFO][4690] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.562 [INFO][4690] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-234-200-140" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.562 [INFO][4690] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" host="172-234-200-140" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.564 [INFO][4690] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4 Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.575 [INFO][4690] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" host="172-234-200-140" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.586 [INFO][4690] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.200/26] block=192.168.94.192/26 handle="k8s-pod-network.8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" host="172-234-200-140" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.587 [INFO][4690] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.200/26] handle="k8s-pod-network.8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" host="172-234-200-140" Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.587 [INFO][4690] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
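The IPAM entries above (affinity lookup for 192.168.94.192/26, then claiming the next free address while the host-wide IPAM lock is held) reduce to a simple allocation pattern. The Go sketch below is illustrative only: it is a toy in-memory model, not Calico's allocator, and the type and field names are hypothetical. It merely reproduces the .199/.200 assignments recorded in this log.

```go
// Illustrative only: a toy model of the block assignment recorded above
// (affinity for 192.168.94.192/26, then claiming the next free address while
// holding the host-wide IPAM lock). This is not Calico's allocator; the type
// and field names are hypothetical.
package main

import (
	"fmt"
	"net"
	"sync"
)

// ipamBlock tracks which offsets inside one /26 block are already handed out.
type ipamBlock struct {
	mu    sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	cidr  *net.IPNet
	inUse map[int]bool
}

// assign claims the lowest free offset in the block and returns that address.
func (b *ipamBlock) assign() (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	base := b.cidr.IP.To4()
	ones, bits := b.cidr.Mask.Size()
	for off := 0; off < 1<<(bits-ones); off++ { // 64 addresses in a /26
		if b.inUse[off] {
			continue
		}
		b.inUse[off] = true
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(off)), nil
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.94.192/26")
	block := &ipamBlock{cidr: cidr, inUse: map[int]bool{}}
	// Pretend .192-.198 were claimed by pods scheduled earlier on this node.
	for off := 0; off <= 6; off++ {
		block.inUse[off] = true
	}
	ip, _ := block.assign()
	fmt.Println("assigned", ip) // 192.168.94.199, as for calico-apiserver-569b7b4d59-5k5g4
	ip, _ = block.assign()
	fmt.Println("assigned", ip) // 192.168.94.200, as for csi-node-driver-dzs24
}
```

Under this simplified model the two pods above land on consecutive addresses simply because they take the next free offsets in the block that is affine to node 172-234-200-140.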
Jan 24 00:29:48.634160 containerd[1583]: 2026-01-24 00:29:48.587 [INFO][4690] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.200/26] IPv6=[] ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" HandleID="k8s-pod-network.8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" Workload="172--234--200--140-k8s-csi--node--driver--dzs24-eth0" Jan 24 00:29:48.634968 containerd[1583]: 2026-01-24 00:29:48.596 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" Namespace="calico-system" Pod="csi-node-driver-dzs24" WorkloadEndpoint="172--234--200--140-k8s-csi--node--driver--dzs24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-csi--node--driver--dzs24-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b58f3e4-7a56-472e-b647-710386b89b86", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"", Pod:"csi-node-driver-dzs24", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali218fd5c2987", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:48.634968 containerd[1583]: 2026-01-24 00:29:48.596 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.200/32] ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" Namespace="calico-system" Pod="csi-node-driver-dzs24" WorkloadEndpoint="172--234--200--140-k8s-csi--node--driver--dzs24-eth0" Jan 24 00:29:48.634968 containerd[1583]: 2026-01-24 00:29:48.596 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali218fd5c2987 ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" Namespace="calico-system" Pod="csi-node-driver-dzs24" WorkloadEndpoint="172--234--200--140-k8s-csi--node--driver--dzs24-eth0" Jan 24 00:29:48.634968 containerd[1583]: 2026-01-24 00:29:48.602 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" Namespace="calico-system" Pod="csi-node-driver-dzs24" WorkloadEndpoint="172--234--200--140-k8s-csi--node--driver--dzs24-eth0" Jan 24 00:29:48.634968 containerd[1583]: 2026-01-24 00:29:48.603 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" 
Namespace="calico-system" Pod="csi-node-driver-dzs24" WorkloadEndpoint="172--234--200--140-k8s-csi--node--driver--dzs24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--140-k8s-csi--node--driver--dzs24-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b58f3e4-7a56-472e-b647-710386b89b86", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-140", ContainerID:"8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4", Pod:"csi-node-driver-dzs24", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali218fd5c2987", MAC:"e2:13:fd:d9:74:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:29:48.634968 containerd[1583]: 2026-01-24 00:29:48.623 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" Namespace="calico-system" Pod="csi-node-driver-dzs24" WorkloadEndpoint="172--234--200--140-k8s-csi--node--driver--dzs24-eth0" Jan 24 00:29:48.667408 containerd[1583]: time="2026-01-24T00:29:48.667310034Z" level=info msg="connecting to shim 8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4" address="unix:///run/containerd/s/3560512bf83db7c034c7fba8d2d2b3ac89dcbb70137585204a301e17c6170a5b" namespace=k8s.io protocol=ttrpc version=3 Jan 24 00:29:48.694834 systemd[1]: Started cri-containerd-8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4.scope - libcontainer container 8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4. 
Jan 24 00:29:48.706181 containerd[1583]: time="2026-01-24T00:29:48.706070054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569b7b4d59-5k5g4,Uid:cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"34bbe73fd9af59f33dbbdc16f96ee4f3dcadff18dcf55f2e514513b1e14c7c64\"" Jan 24 00:29:48.708635 containerd[1583]: time="2026-01-24T00:29:48.708308043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:29:48.716000 audit[4800]: NETFILTER_CFG table=filter:141 family=2 entries=56 op=nft_register_chain pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 24 00:29:48.716000 audit[4800]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7fff7d1f2760 a2=0 a3=7fff7d1f274c items=0 ppid=4125 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.716000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 24 00:29:48.730000 audit: BPF prog-id=268 op=LOAD Jan 24 00:29:48.731000 audit: BPF prog-id=269 op=LOAD Jan 24 00:29:48.731000 audit[4777]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4766 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303034623439303732343132386132313832396661666337343562 Jan 24 00:29:48.731000 audit: BPF prog-id=269 op=UNLOAD Jan 24 00:29:48.731000 audit[4777]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4766 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303034623439303732343132386132313832396661666337343562 Jan 24 00:29:48.731000 audit: BPF prog-id=270 op=LOAD Jan 24 00:29:48.731000 audit[4777]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4766 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303034623439303732343132386132313832396661666337343562 Jan 24 00:29:48.731000 audit: BPF prog-id=271 op=LOAD Jan 24 00:29:48.731000 audit[4777]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4766 pid=4777 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303034623439303732343132386132313832396661666337343562 Jan 24 00:29:48.732000 audit: BPF prog-id=271 op=UNLOAD Jan 24 00:29:48.732000 audit[4777]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4766 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303034623439303732343132386132313832396661666337343562 Jan 24 00:29:48.734000 audit: BPF prog-id=270 op=UNLOAD Jan 24 00:29:48.734000 audit[4777]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4766 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303034623439303732343132386132313832396661666337343562 Jan 24 00:29:48.734000 audit: BPF prog-id=272 op=LOAD Jan 24 00:29:48.734000 audit[4777]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4766 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:48.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866303034623439303732343132386132313832396661666337343562 Jan 24 00:29:48.752594 containerd[1583]: time="2026-01-24T00:29:48.752566231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dzs24,Uid:9b58f3e4-7a56-472e-b647-710386b89b86,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f004b490724128a21829fafc745b8faeddf6fd22b0e52fdb900b1923b55f2b4\"" Jan 24 00:29:48.831960 containerd[1583]: time="2026-01-24T00:29:48.831906641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:48.832900 containerd[1583]: time="2026-01-24T00:29:48.832871591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:29:48.833101 containerd[1583]: time="2026-01-24T00:29:48.832939101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:48.833196 kubelet[2775]: E0124 
00:29:48.833105 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:29:48.833196 kubelet[2775]: E0124 00:29:48.833146 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:29:48.833387 kubelet[2775]: E0124 00:29:48.833343 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6s5cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569b7b4d59-5k5g4_calico-apiserver(cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:48.833714 containerd[1583]: time="2026-01-24T00:29:48.833524840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:29:48.834573 kubelet[2775]: E0124 00:29:48.834544 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:29:48.958469 containerd[1583]: time="2026-01-24T00:29:48.958198258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:48.959822 containerd[1583]: time="2026-01-24T00:29:48.959750467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:29:48.960075 containerd[1583]: time="2026-01-24T00:29:48.959802037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:48.960328 kubelet[2775]: E0124 00:29:48.960262 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:29:48.960328 kubelet[2775]: E0124 00:29:48.960304 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:29:48.960459 kubelet[2775]: E0124 00:29:48.960400 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7qvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-dzs24_calico-system(9b58f3e4-7a56-472e-b647-710386b89b86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:48.963289 containerd[1583]: time="2026-01-24T00:29:48.963262736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:29:49.089391 containerd[1583]: time="2026-01-24T00:29:49.089340233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:49.090370 containerd[1583]: time="2026-01-24T00:29:49.090323452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:29:49.090458 containerd[1583]: time="2026-01-24T00:29:49.090393732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:49.090910 kubelet[2775]: E0124 00:29:49.090859 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:29:49.091367 kubelet[2775]: E0124 00:29:49.090924 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:29:49.091367 kubelet[2775]: E0124 00:29:49.091049 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7qvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzs24_calico-system(9b58f3e4-7a56-472e-b647-710386b89b86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:49.092312 kubelet[2775]: E0124 00:29:49.092254 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:29:49.507970 kubelet[2775]: E0124 00:29:49.507932 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:29:49.510008 kubelet[2775]: E0124 00:29:49.509968 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:29:49.541000 audit[4813]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:49.541000 audit[4813]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdaabf5cd0 a2=0 a3=7ffdaabf5cbc items=0 ppid=2879 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:49.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:49.546000 audit[4813]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=4813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:29:49.546000 audit[4813]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdaabf5cd0 a2=0 a3=7ffdaabf5cbc items=0 ppid=2879 pid=4813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:29:49.546000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:29:49.908860 systemd-networkd[1487]: cali218fd5c2987: Gained IPv6LL Jan 24 00:29:49.909698 systemd-networkd[1487]: calide11c4a8f31: Gained IPv6LL Jan 24 00:29:50.512754 kubelet[2775]: E0124 00:29:50.512308 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:29:50.513570 kubelet[2775]: E0124 00:29:50.513446 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:29:51.325773 containerd[1583]: time="2026-01-24T00:29:51.325723036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:29:51.450586 containerd[1583]: time="2026-01-24T00:29:51.450525452Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:51.451438 containerd[1583]: time="2026-01-24T00:29:51.451411914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:29:51.451528 containerd[1583]: time="2026-01-24T00:29:51.451472704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:51.451695 kubelet[2775]: E0124 00:29:51.451638 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:29:51.451749 kubelet[2775]: E0124 00:29:51.451704 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:29:51.451891 kubelet[2775]: E0124 00:29:51.451793 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:afba9b21ebd04ac3a3f6ee43ebc36fd5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xq9c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-df9447cd6-v4qqr_calico-system(3a284cf5-2a50-4ae3-ac95-941e18d4d850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:51.454833 containerd[1583]: time="2026-01-24T00:29:51.454766025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:29:51.597443 containerd[1583]: time="2026-01-24T00:29:51.597302345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:51.598841 containerd[1583]: time="2026-01-24T00:29:51.598735380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:29:51.598841 containerd[1583]: time="2026-01-24T00:29:51.598817890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:51.599194 kubelet[2775]: E0124 00:29:51.599070 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:29:51.599903 kubelet[2775]: E0124 00:29:51.599167 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:29:51.600177 kubelet[2775]: E0124 00:29:51.599810 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xq9c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df9447cd6-v4qqr_calico-system(3a284cf5-2a50-4ae3-ac95-941e18d4d850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:51.601642 kubelet[2775]: E0124 00:29:51.601610 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:29:55.920224 kubelet[2775]: I0124 00:29:55.920129 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:29:55.920725 kubelet[2775]: E0124 00:29:55.920532 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:29:56.522987 kubelet[2775]: E0124 00:29:56.522914 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 
00:29:57.325449 containerd[1583]: time="2026-01-24T00:29:57.325366925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:29:57.456547 containerd[1583]: time="2026-01-24T00:29:57.456498399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:57.457362 containerd[1583]: time="2026-01-24T00:29:57.457306051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:29:57.458256 containerd[1583]: time="2026-01-24T00:29:57.457385690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:57.458343 kubelet[2775]: E0124 00:29:57.457568 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:29:57.458343 kubelet[2775]: E0124 00:29:57.457626 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:29:57.458343 kubelet[2775]: E0124 00:29:57.457889 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jl6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547797c6d5-k28bz_calico-system(1fd846d3-1f7b-431f-b783-0e9765cbef2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:57.459234 kubelet[2775]: E0124 00:29:57.459202 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:29:58.327806 containerd[1583]: time="2026-01-24T00:29:58.327505996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:29:58.456267 containerd[1583]: time="2026-01-24T00:29:58.456217877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:29:58.457307 containerd[1583]: time="2026-01-24T00:29:58.457174248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:29:58.457307 containerd[1583]: time="2026-01-24T00:29:58.457208948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:29:58.457477 kubelet[2775]: E0124 00:29:58.457428 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:29:58.457477 kubelet[2775]: E0124 00:29:58.457472 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:29:58.457628 kubelet[2775]: E0124 00:29:58.457572 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95p6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569b7b4d59-8g26m_calico-apiserver(2078a205-dd8a-4be4-86c7-d47960d16fb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:29:58.458842 kubelet[2775]: E0124 00:29:58.458804 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:30:01.324409 containerd[1583]: time="2026-01-24T00:30:01.324347438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:30:01.563900 containerd[1583]: time="2026-01-24T00:30:01.563847699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:01.565267 containerd[1583]: time="2026-01-24T00:30:01.565214040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:30:01.565267 containerd[1583]: time="2026-01-24T00:30:01.565239081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:01.565586 kubelet[2775]: 
E0124 00:30:01.565506 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:30:01.565986 kubelet[2775]: E0124 00:30:01.565595 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:30:01.566248 kubelet[2775]: E0124 00:30:01.566171 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5rh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-b2qbk_calico-system(fe31be8c-c3f6-4683-9bda-2dbdf7783a1a): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:01.567935 kubelet[2775]: E0124 00:30:01.567899 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:30:02.326268 containerd[1583]: time="2026-01-24T00:30:02.325947445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:30:02.463588 containerd[1583]: time="2026-01-24T00:30:02.463410459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:02.464315 containerd[1583]: time="2026-01-24T00:30:02.464278369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:30:02.464511 containerd[1583]: time="2026-01-24T00:30:02.464376879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:02.464953 kubelet[2775]: E0124 00:30:02.464903 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:30:02.465078 kubelet[2775]: E0124 00:30:02.465059 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:30:02.465395 kubelet[2775]: E0124 00:30:02.465338 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7qvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzs24_calico-system(9b58f3e4-7a56-472e-b647-710386b89b86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:02.467353 containerd[1583]: time="2026-01-24T00:30:02.467323714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:30:02.608619 containerd[1583]: time="2026-01-24T00:30:02.608460352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:02.609642 containerd[1583]: time="2026-01-24T00:30:02.609592464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:30:02.609770 containerd[1583]: time="2026-01-24T00:30:02.609677814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:02.609847 kubelet[2775]: E0124 00:30:02.609816 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:30:02.610360 kubelet[2775]: E0124 00:30:02.609860 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:30:02.610360 kubelet[2775]: E0124 00:30:02.609954 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7qvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzs24_calico-system(9b58f3e4-7a56-472e-b647-710386b89b86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:02.611362 kubelet[2775]: E0124 00:30:02.611333 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:30:05.327144 containerd[1583]: time="2026-01-24T00:30:05.327080206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:30:05.329959 kubelet[2775]: E0124 00:30:05.329810 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:30:05.524705 containerd[1583]: time="2026-01-24T00:30:05.524578045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:05.526354 containerd[1583]: time="2026-01-24T00:30:05.525868626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:30:05.526354 containerd[1583]: time="2026-01-24T00:30:05.525954246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:05.526445 kubelet[2775]: E0124 00:30:05.526099 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:05.526445 kubelet[2775]: E0124 00:30:05.526140 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:05.526445 kubelet[2775]: E0124 00:30:05.526260 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6s5cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569b7b4d59-5k5g4_calico-apiserver(cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:05.527700 kubelet[2775]: E0124 00:30:05.527627 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:30:10.327750 kubelet[2775]: E0124 00:30:10.327630 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:30:11.325509 kubelet[2775]: E0124 00:30:11.325470 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:30:14.325204 kubelet[2775]: E0124 00:30:14.325154 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:30:16.327478 kubelet[2775]: E0124 00:30:16.327393 2775 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:30:17.325125 kubelet[2775]: E0124 00:30:17.324975 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:30:20.326940 containerd[1583]: time="2026-01-24T00:30:20.326806663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:30:20.465857 containerd[1583]: time="2026-01-24T00:30:20.465780681Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:20.466984 containerd[1583]: time="2026-01-24T00:30:20.466951640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:30:20.467072 containerd[1583]: time="2026-01-24T00:30:20.467023480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:20.467223 kubelet[2775]: E0124 00:30:20.467177 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:30:20.467736 kubelet[2775]: E0124 00:30:20.467223 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:30:20.467736 kubelet[2775]: E0124 00:30:20.467389 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:afba9b21ebd04ac3a3f6ee43ebc36fd5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xq9c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df9447cd6-v4qqr_calico-system(3a284cf5-2a50-4ae3-ac95-941e18d4d850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:20.470397 containerd[1583]: time="2026-01-24T00:30:20.470364890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:30:20.595679 containerd[1583]: time="2026-01-24T00:30:20.595280578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:20.596251 containerd[1583]: time="2026-01-24T00:30:20.596180957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:30:20.596251 containerd[1583]: time="2026-01-24T00:30:20.596246157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:20.596432 kubelet[2775]: E0124 00:30:20.596395 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:30:20.596568 kubelet[2775]: E0124 00:30:20.596448 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:30:20.597033 kubelet[2775]: E0124 00:30:20.596633 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xq9c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df9447cd6-v4qqr_calico-system(3a284cf5-2a50-4ae3-ac95-941e18d4d850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:20.598323 kubelet[2775]: E0124 00:30:20.598282 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:30:23.326266 containerd[1583]: time="2026-01-24T00:30:23.326196779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:30:23.468048 containerd[1583]: time="2026-01-24T00:30:23.467975652Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:23.469314 containerd[1583]: time="2026-01-24T00:30:23.469252791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:30:23.469715 containerd[1583]: time="2026-01-24T00:30:23.469624531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:23.470284 kubelet[2775]: E0124 00:30:23.470087 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:30:23.470284 kubelet[2775]: E0124 00:30:23.470136 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:30:23.472044 kubelet[2775]: E0124 00:30:23.470391 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jl6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547797c6d5-k28bz_calico-system(1fd846d3-1f7b-431f-b783-0e9765cbef2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:23.472207 containerd[1583]: time="2026-01-24T00:30:23.470923272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:30:23.472929 kubelet[2775]: E0124 00:30:23.472888 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:30:23.616621 containerd[1583]: time="2026-01-24T00:30:23.615917245Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:23.617218 containerd[1583]: time="2026-01-24T00:30:23.617176695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:30:23.617276 containerd[1583]: time="2026-01-24T00:30:23.617249515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:23.617442 kubelet[2775]: E0124 00:30:23.617404 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:23.617524 kubelet[2775]: E0124 00:30:23.617470 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:23.617928 kubelet[2775]: E0124 00:30:23.617823 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95p6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569b7b4d59-8g26m_calico-apiserver(2078a205-dd8a-4be4-86c7-d47960d16fb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:23.619014 kubelet[2775]: E0124 00:30:23.618967 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:30:25.324164 kubelet[2775]: E0124 00:30:25.324128 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:30:28.327538 containerd[1583]: time="2026-01-24T00:30:28.327157537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:30:28.652295 containerd[1583]: time="2026-01-24T00:30:28.651734922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:28.653473 containerd[1583]: time="2026-01-24T00:30:28.653247001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:30:28.653736 containerd[1583]: time="2026-01-24T00:30:28.653319331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:28.653848 kubelet[2775]: E0124 00:30:28.653775 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:30:28.653848 kubelet[2775]: E0124 00:30:28.653821 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:30:28.655050 kubelet[2775]: E0124 00:30:28.653935 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5rh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-b2qbk_calico-system(fe31be8c-c3f6-4683-9bda-2dbdf7783a1a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:28.655225 kubelet[2775]: E0124 00:30:28.655103 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:30:29.325754 containerd[1583]: time="2026-01-24T00:30:29.325711300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:30:29.458105 containerd[1583]: time="2026-01-24T00:30:29.458060124Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:29.458920 containerd[1583]: time="2026-01-24T00:30:29.458880935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:30:29.458964 containerd[1583]: time="2026-01-24T00:30:29.458951955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:29.459139 kubelet[2775]: E0124 00:30:29.459106 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:30:29.459185 kubelet[2775]: E0124 00:30:29.459150 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:30:29.459284 kubelet[2775]: E0124 00:30:29.459244 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7qvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzs24_calico-system(9b58f3e4-7a56-472e-b647-710386b89b86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:29.462128 containerd[1583]: time="2026-01-24T00:30:29.462052744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:30:29.588805 containerd[1583]: time="2026-01-24T00:30:29.588644310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:29.589825 containerd[1583]: time="2026-01-24T00:30:29.589792369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:30:29.589919 containerd[1583]: time="2026-01-24T00:30:29.589867459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:29.590442 kubelet[2775]: E0124 00:30:29.590375 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:30:29.590504 kubelet[2775]: E0124 00:30:29.590455 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:30:29.590862 kubelet[2775]: E0124 00:30:29.590602 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7qvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzs24_calico-system(9b58f3e4-7a56-472e-b647-710386b89b86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:29.592504 kubelet[2775]: E0124 00:30:29.592467 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:30:30.324527 kubelet[2775]: E0124 00:30:30.324373 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:30:31.325036 containerd[1583]: 
time="2026-01-24T00:30:31.324995097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:30:31.468003 containerd[1583]: time="2026-01-24T00:30:31.467960393Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:30:31.468761 containerd[1583]: time="2026-01-24T00:30:31.468706854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:30:31.468860 containerd[1583]: time="2026-01-24T00:30:31.468770664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:30:31.468968 kubelet[2775]: E0124 00:30:31.468914 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:31.469494 kubelet[2775]: E0124 00:30:31.468978 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:31.469494 kubelet[2775]: E0124 00:30:31.469123 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6s5cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569b7b4d59-5k5g4_calico-apiserver(cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:31.470559 kubelet[2775]: E0124 00:30:31.470521 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:30:35.323584 kubelet[2775]: E0124 00:30:35.323552 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:30:36.327403 kubelet[2775]: E0124 00:30:36.327361 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:30:36.330343 kubelet[2775]: E0124 00:30:36.328404 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:30:38.327692 kubelet[2775]: E0124 00:30:38.327395 
2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:30:40.325968 kubelet[2775]: E0124 00:30:40.324798 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:30:40.327104 kubelet[2775]: E0124 00:30:40.326548 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:30:44.326480 kubelet[2775]: E0124 00:30:44.324369 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:30:44.328440 kubelet[2775]: E0124 00:30:44.328388 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:30:45.324397 kubelet[2775]: E0124 00:30:45.324185 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:30:49.324360 kubelet[2775]: E0124 00:30:49.324296 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:30:50.326524 kubelet[2775]: E0124 00:30:50.326098 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:30:51.325394 kubelet[2775]: E0124 00:30:51.325138 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:30:51.326173 kubelet[2775]: E0124 00:30:51.326130 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:30:53.324535 kubelet[2775]: E0124 00:30:53.324184 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:30:57.324539 kubelet[2775]: E0124 00:30:57.324467 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:30:59.324093 kubelet[2775]: E0124 00:30:59.323759 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:30:59.325399 kubelet[2775]: E0124 00:30:59.324767 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:31:01.324852 kubelet[2775]: E0124 00:31:01.324767 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:31:02.326639 kubelet[2775]: E0124 00:31:02.326356 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:31:02.327227 containerd[1583]: time="2026-01-24T00:31:02.326399402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:31:02.454670 containerd[1583]: time="2026-01-24T00:31:02.454517291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:31:02.455562 containerd[1583]: time="2026-01-24T00:31:02.455488609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:31:02.455562 containerd[1583]: time="2026-01-24T00:31:02.455518139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:31:02.455704 kubelet[2775]: E0124 00:31:02.455642 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:31:02.455704 kubelet[2775]: E0124 00:31:02.455699 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:31:02.455847 kubelet[2775]: E0124 00:31:02.455805 2775 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:afba9b21ebd04ac3a3f6ee43ebc36fd5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xq9c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df9447cd6-v4qqr_calico-system(3a284cf5-2a50-4ae3-ac95-941e18d4d850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:02.458042 containerd[1583]: time="2026-01-24T00:31:02.458014421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:31:02.586007 containerd[1583]: time="2026-01-24T00:31:02.585755772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:31:02.587304 containerd[1583]: time="2026-01-24T00:31:02.587174589Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:31:02.587304 containerd[1583]: time="2026-01-24T00:31:02.587265678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 24 00:31:02.587426 kubelet[2775]: E0124 00:31:02.587396 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:31:02.587479 kubelet[2775]: E0124 00:31:02.587441 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:31:02.587679 kubelet[2775]: E0124 00:31:02.587531 2775 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xq9c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df9447cd6-v4qqr_calico-system(3a284cf5-2a50-4ae3-ac95-941e18d4d850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:02.589228 kubelet[2775]: E0124 00:31:02.589196 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:31:04.327664 containerd[1583]: time="2026-01-24T00:31:04.327537309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:31:04.464542 containerd[1583]: time="2026-01-24T00:31:04.464491591Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:31:04.465815 containerd[1583]: time="2026-01-24T00:31:04.465564018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:31:04.465815 containerd[1583]: time="2026-01-24T00:31:04.465639778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 24 00:31:04.466470 kubelet[2775]: E0124 00:31:04.466102 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:31:04.466470 kubelet[2775]: E0124 00:31:04.466154 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:31:04.466470 kubelet[2775]: E0124 00:31:04.466294 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jl6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-547797c6d5-k28bz_calico-system(1fd846d3-1f7b-431f-b783-0e9765cbef2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:04.467477 kubelet[2775]: E0124 00:31:04.467418 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:31:11.325070 kubelet[2775]: E0124 00:31:11.324753 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:31:14.329581 containerd[1583]: time="2026-01-24T00:31:14.329523622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:31:14.455369 containerd[1583]: time="2026-01-24T00:31:14.455310438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:31:14.456363 containerd[1583]: time="2026-01-24T00:31:14.456307007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:31:14.456445 containerd[1583]: time="2026-01-24T00:31:14.456403206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:31:14.457757 kubelet[2775]: E0124 00:31:14.456611 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:14.457757 kubelet[2775]: E0124 00:31:14.457707 2775 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:14.458124 kubelet[2775]: E0124 00:31:14.457904 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95p6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569b7b4d59-8g26m_calico-apiserver(2078a205-dd8a-4be4-86c7-d47960d16fb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:14.458597 containerd[1583]: time="2026-01-24T00:31:14.458565932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:31:14.459204 kubelet[2775]: E0124 00:31:14.459139 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:31:14.583110 containerd[1583]: time="2026-01-24T00:31:14.582976060Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 24 00:31:14.584160 containerd[1583]: time="2026-01-24T00:31:14.584093918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:31:14.584251 containerd[1583]: time="2026-01-24T00:31:14.584214088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 24 00:31:14.584498 kubelet[2775]: E0124 00:31:14.584439 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:31:14.584563 kubelet[2775]: E0124 00:31:14.584505 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:31:14.584739 kubelet[2775]: E0124 00:31:14.584677 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7qvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzs24_calico-system(9b58f3e4-7a56-472e-b647-710386b89b86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:14.587106 containerd[1583]: time="2026-01-24T00:31:14.587055551Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:31:14.724297 containerd[1583]: time="2026-01-24T00:31:14.724228084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:31:14.725210 containerd[1583]: time="2026-01-24T00:31:14.725179961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:31:14.725270 containerd[1583]: time="2026-01-24T00:31:14.725240491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 24 00:31:14.725395 kubelet[2775]: E0124 00:31:14.725350 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:31:14.725444 kubelet[2775]: E0124 00:31:14.725400 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:31:14.725591 kubelet[2775]: E0124 00:31:14.725505 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7qvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzs24_calico-system(9b58f3e4-7a56-472e-b647-710386b89b86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:14.726841 kubelet[2775]: E0124 00:31:14.726792 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:31:15.325333 containerd[1583]: time="2026-01-24T00:31:15.325077209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:31:15.463071 containerd[1583]: time="2026-01-24T00:31:15.463029465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:31:15.464020 containerd[1583]: time="2026-01-24T00:31:15.463971653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:31:15.464020 containerd[1583]: time="2026-01-24T00:31:15.463997323Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 24 00:31:15.464212 kubelet[2775]: E0124 00:31:15.464182 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:31:15.464488 kubelet[2775]: E0124 00:31:15.464228 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:31:15.464488 kubelet[2775]: E0124 00:31:15.464402 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5rh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod goldmane-666569f655-b2qbk_calico-system(fe31be8c-c3f6-4683-9bda-2dbdf7783a1a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:15.466515 kubelet[2775]: E0124 00:31:15.466471 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:31:16.326895 kubelet[2775]: E0124 00:31:16.326833 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:31:17.325352 kubelet[2775]: E0124 00:31:17.325308 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:31:17.325917 kubelet[2775]: E0124 00:31:17.325497 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:31:22.331555 containerd[1583]: time="2026-01-24T00:31:22.331401113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:31:22.469117 containerd[1583]: time="2026-01-24T00:31:22.469046836Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:31:22.470146 containerd[1583]: time="2026-01-24T00:31:22.470087614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:31:22.470367 containerd[1583]: time="2026-01-24T00:31:22.470135624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 24 00:31:22.470749 kubelet[2775]: E0124 00:31:22.470683 2775 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:22.470749 kubelet[2775]: E0124 00:31:22.470726 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:22.472182 kubelet[2775]: E0124 00:31:22.472117 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6s5cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569b7b4d59-5k5g4_calico-apiserver(cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:22.473398 kubelet[2775]: E0124 00:31:22.473349 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" 
podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:31:25.324240 kubelet[2775]: E0124 00:31:25.324194 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:31:26.324326 kubelet[2775]: E0124 00:31:26.324017 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:31:28.328352 kubelet[2775]: E0124 00:31:28.327322 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:31:28.329353 kubelet[2775]: E0124 00:31:28.329313 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:31:31.324764 kubelet[2775]: E0124 00:31:31.324718 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:31:31.325684 kubelet[2775]: E0124 00:31:31.325183 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:31:35.328682 kubelet[2775]: E0124 00:31:35.326241 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:31:36.976323 systemd[1]: Started sshd@7-172.234.200.140:22-68.220.241.50:41614.service - OpenSSH per-connection server daemon (68.220.241.50:41614). Jan 24 00:31:36.985925 kernel: kauditd_printk_skb: 52 callbacks suppressed Jan 24 00:31:36.985995 kernel: audit: type=1130 audit(1769214696.974:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.234.200.140:22-68.220.241.50:41614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:36.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.234.200.140:22-68.220.241.50:41614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:37.157000 audit[5011]: USER_ACCT pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.162095 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:37.164175 sshd[5011]: Accepted publickey for core from 68.220.241.50 port 41614 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:31:37.157000 audit[5011]: CRED_ACQ pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.167726 kernel: audit: type=1101 audit(1769214697.157:771): pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.167779 kernel: audit: type=1103 audit(1769214697.157:772): pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.171775 systemd-logind[1571]: New session 9 of user core. 
Jan 24 00:31:37.157000 audit[5011]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea395bf00 a2=3 a3=0 items=0 ppid=1 pid=5011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:37.180446 kernel: audit: type=1006 audit(1769214697.157:773): pid=5011 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 24 00:31:37.180490 kernel: audit: type=1300 audit(1769214697.157:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea395bf00 a2=3 a3=0 items=0 ppid=1 pid=5011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:37.157000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:37.187060 kernel: audit: type=1327 audit(1769214697.157:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:37.188134 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:31:37.192000 audit[5011]: USER_START pid=5011 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.195000 audit[5015]: CRED_ACQ pid=5015 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.204348 kernel: audit: type=1105 audit(1769214697.192:774): pid=5011 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.204403 kernel: audit: type=1103 audit(1769214697.195:775): pid=5015 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.325625 kubelet[2775]: E0124 00:31:37.325555 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:31:37.330701 sshd[5015]: Connection closed by 68.220.241.50 port 41614 Jan 24 00:31:37.329946 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:37.343720 kernel: audit: type=1106 audit(1769214697.332:776): pid=5011 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.332000 audit[5011]: USER_END pid=5011 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.338165 systemd[1]: sshd@7-172.234.200.140:22-68.220.241.50:41614.service: Deactivated successfully. Jan 24 00:31:37.342372 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:31:37.353782 kernel: audit: type=1104 audit(1769214697.332:777): pid=5011 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.332000 audit[5011]: CRED_DISP pid=5011 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:37.348715 systemd-logind[1571]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:31:37.353428 systemd-logind[1571]: Removed session 9. Jan 24 00:31:37.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.234.200.140:22-68.220.241.50:41614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:31:41.324455 kubelet[2775]: E0124 00:31:41.324413 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:31:41.325289 kubelet[2775]: E0124 00:31:41.325064 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:31:42.325862 kubelet[2775]: E0124 00:31:42.324715 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:31:42.358401 systemd[1]: Started sshd@8-172.234.200.140:22-68.220.241.50:41616.service - OpenSSH per-connection server daemon (68.220.241.50:41616). Jan 24 00:31:42.367986 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:31:42.368026 kernel: audit: type=1130 audit(1769214702.356:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.234.200.140:22-68.220.241.50:41616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:42.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.234.200.140:22-68.220.241.50:41616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:31:42.508000 audit[5028]: USER_ACCT pid=5028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.512987 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:42.516079 sshd[5028]: Accepted publickey for core from 68.220.241.50 port 41616 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:31:42.518685 kernel: audit: type=1101 audit(1769214702.508:780): pid=5028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.509000 audit[5028]: CRED_ACQ pid=5028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.527878 kernel: audit: type=1103 audit(1769214702.509:781): pid=5028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.527988 kernel: audit: type=1006 audit(1769214702.509:782): pid=5028 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 24 00:31:42.531873 kernel: audit: type=1300 audit(1769214702.509:782): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff01225b0 a2=3 a3=0 items=0 ppid=1 pid=5028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:42.509000 audit[5028]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff01225b0 a2=3 a3=0 items=0 ppid=1 pid=5028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:42.529292 systemd-logind[1571]: New session 10 of user core. Jan 24 00:31:42.509000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:42.540057 kernel: audit: type=1327 audit(1769214702.509:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:42.540799 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 24 00:31:42.549000 audit[5028]: USER_START pid=5028 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.560762 kernel: audit: type=1105 audit(1769214702.549:783): pid=5028 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.560000 audit[5032]: CRED_ACQ pid=5032 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.569686 kernel: audit: type=1103 audit(1769214702.560:784): pid=5032 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.683755 sshd[5032]: Connection closed by 68.220.241.50 port 41616 Jan 24 00:31:42.685118 sshd-session[5028]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:42.685000 audit[5028]: USER_END pid=5028 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.698701 kernel: audit: type=1106 audit(1769214702.685:785): pid=5028 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.700116 systemd[1]: sshd@8-172.234.200.140:22-68.220.241.50:41616.service: Deactivated successfully. Jan 24 00:31:42.703035 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:31:42.707869 systemd-logind[1571]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:31:42.710183 systemd-logind[1571]: Removed session 10. Jan 24 00:31:42.685000 audit[5028]: CRED_DISP pid=5028 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.717712 kernel: audit: type=1104 audit(1769214702.685:786): pid=5028 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:42.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.234.200.140:22-68.220.241.50:41616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:31:45.326053 kubelet[2775]: E0124 00:31:45.325958 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:31:47.713053 systemd[1]: Started sshd@9-172.234.200.140:22-68.220.241.50:37058.service - OpenSSH per-connection server daemon (68.220.241.50:37058). Jan 24 00:31:47.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.234.200.140:22-68.220.241.50:37058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:47.715593 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:31:47.715666 kernel: audit: type=1130 audit(1769214707.711:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.234.200.140:22-68.220.241.50:37058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:47.872000 audit[5046]: USER_ACCT pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:47.881461 sshd[5046]: Accepted publickey for core from 68.220.241.50 port 37058 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:31:47.881861 kernel: audit: type=1101 audit(1769214707.872:789): pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:47.884477 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:47.881000 audit[5046]: CRED_ACQ pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:47.892672 kernel: audit: type=1103 audit(1769214707.881:790): pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:47.881000 audit[5046]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeda7bc6a0 a2=3 a3=0 items=0 ppid=1 pid=5046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:47.901848 systemd-logind[1571]: New session 11 of user core. Jan 24 00:31:47.924063 kernel: audit: type=1006 audit(1769214707.881:791): pid=5046 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 24 00:31:47.924140 kernel: audit: type=1300 audit(1769214707.881:791): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeda7bc6a0 a2=3 a3=0 items=0 ppid=1 pid=5046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:47.881000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:47.932011 kernel: audit: type=1327 audit(1769214707.881:791): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:47.934944 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:31:47.940000 audit[5046]: USER_START pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:47.950685 kernel: audit: type=1105 audit(1769214707.940:792): pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:47.943000 audit[5050]: CRED_ACQ pid=5050 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:47.957693 kernel: audit: type=1103 audit(1769214707.943:793): pid=5050 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.087499 sshd[5050]: Connection closed by 68.220.241.50 port 37058 Jan 24 00:31:48.088935 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:48.091000 audit[5046]: USER_END pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.100867 systemd[1]: sshd@9-172.234.200.140:22-68.220.241.50:37058.service: Deactivated successfully. 
Jan 24 00:31:48.102707 kernel: audit: type=1106 audit(1769214708.091:794): pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.112440 kernel: audit: type=1104 audit(1769214708.091:795): pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.091000 audit[5046]: CRED_DISP pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.106295 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:31:48.110572 systemd-logind[1571]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:31:48.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.234.200.140:22-68.220.241.50:37058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:48.127599 systemd[1]: Started sshd@10-172.234.200.140:22-68.220.241.50:37068.service - OpenSSH per-connection server daemon (68.220.241.50:37068). Jan 24 00:31:48.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.234.200.140:22-68.220.241.50:37068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:48.130541 systemd-logind[1571]: Removed session 11. Jan 24 00:31:48.293000 audit[5063]: USER_ACCT pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.295564 sshd[5063]: Accepted publickey for core from 68.220.241.50 port 37068 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:31:48.295000 audit[5063]: CRED_ACQ pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.295000 audit[5063]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcffb82110 a2=3 a3=0 items=0 ppid=1 pid=5063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:48.295000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:48.298778 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:48.306590 systemd-logind[1571]: New session 12 of user core. Jan 24 00:31:48.312925 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 24 00:31:48.315000 audit[5063]: USER_START pid=5063 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.318000 audit[5067]: CRED_ACQ pid=5067 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.477955 sshd[5067]: Connection closed by 68.220.241.50 port 37068 Jan 24 00:31:48.479906 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:48.481000 audit[5063]: USER_END pid=5063 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.482000 audit[5063]: CRED_DISP pid=5063 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.487505 systemd[1]: sshd@10-172.234.200.140:22-68.220.241.50:37068.service: Deactivated successfully. Jan 24 00:31:48.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.234.200.140:22-68.220.241.50:37068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:48.493616 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:31:48.496282 systemd-logind[1571]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:31:48.499851 systemd-logind[1571]: Removed session 12. Jan 24 00:31:48.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.234.200.140:22-68.220.241.50:37072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:48.514960 systemd[1]: Started sshd@11-172.234.200.140:22-68.220.241.50:37072.service - OpenSSH per-connection server daemon (68.220.241.50:37072). 
Jan 24 00:31:48.679000 audit[5077]: USER_ACCT pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.680927 sshd[5077]: Accepted publickey for core from 68.220.241.50 port 37072 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:31:48.680000 audit[5077]: CRED_ACQ pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.680000 audit[5077]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeed919b00 a2=3 a3=0 items=0 ppid=1 pid=5077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:48.680000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:48.683703 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:48.691237 systemd-logind[1571]: New session 13 of user core. Jan 24 00:31:48.694830 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:31:48.701000 audit[5077]: USER_START pid=5077 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.703000 audit[5081]: CRED_ACQ pid=5081 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.824801 sshd[5081]: Connection closed by 68.220.241.50 port 37072 Jan 24 00:31:48.825460 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:48.827000 audit[5077]: USER_END pid=5077 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.827000 audit[5077]: CRED_DISP pid=5077 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:48.833088 systemd-logind[1571]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:31:48.833445 systemd[1]: sshd@11-172.234.200.140:22-68.220.241.50:37072.service: Deactivated successfully. Jan 24 00:31:48.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.234.200.140:22-68.220.241.50:37072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:48.837495 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 24 00:31:48.840804 systemd-logind[1571]: Removed session 13. Jan 24 00:31:49.325646 kubelet[2775]: E0124 00:31:49.325591 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:31:50.326707 kubelet[2775]: E0124 00:31:50.326018 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:31:50.326707 kubelet[2775]: E0124 00:31:50.326136 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:31:50.327514 kubelet[2775]: E0124 00:31:50.327267 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:31:50.327619 kubelet[2775]: E0124 00:31:50.327597 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:31:53.324575 kubelet[2775]: E0124 00:31:53.324285 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:31:53.859231 systemd[1]: Started sshd@12-172.234.200.140:22-68.220.241.50:38730.service - OpenSSH per-connection server daemon (68.220.241.50:38730). Jan 24 00:31:53.868172 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 24 00:31:53.868221 kernel: audit: type=1130 audit(1769214713.857:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.234.200.140:22-68.220.241.50:38730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:53.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.234.200.140:22-68.220.241.50:38730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:31:54.045517 kernel: audit: type=1101 audit(1769214714.033:816): pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.033000 audit[5097]: USER_ACCT pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.045763 sshd[5097]: Accepted publickey for core from 68.220.241.50 port 38730 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:31:54.038826 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:54.045309 systemd-logind[1571]: New session 14 of user core. Jan 24 00:31:54.034000 audit[5097]: CRED_ACQ pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.058629 kernel: audit: type=1103 audit(1769214714.034:817): pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.058719 kernel: audit: type=1006 audit(1769214714.034:818): pid=5097 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 24 00:31:54.034000 audit[5097]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd102718d0 a2=3 a3=0 items=0 ppid=1 pid=5097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:54.064372 kernel: audit: type=1300 audit(1769214714.034:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd102718d0 a2=3 a3=0 items=0 ppid=1 pid=5097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:54.071688 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 24 00:31:54.034000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:54.080681 kernel: audit: type=1327 audit(1769214714.034:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:54.080000 audit[5097]: USER_START pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.093685 kernel: audit: type=1105 audit(1769214714.080:819): pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.092000 audit[5101]: CRED_ACQ pid=5101 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.101675 kernel: audit: type=1103 audit(1769214714.092:820): pid=5101 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.205747 sshd[5101]: Connection closed by 68.220.241.50 port 38730 Jan 24 00:31:54.206292 sshd-session[5097]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:54.207000 audit[5097]: USER_END pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.220676 kernel: audit: type=1106 audit(1769214714.207:821): pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.222481 systemd[1]: sshd@12-172.234.200.140:22-68.220.241.50:38730.service: Deactivated successfully. Jan 24 00:31:54.207000 audit[5097]: CRED_DISP pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.225924 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:31:54.229085 systemd-logind[1571]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:31:54.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.234.200.140:22-68.220.241.50:38730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:54.231286 systemd-logind[1571]: Removed session 14. 
Jan 24 00:31:54.233763 kernel: audit: type=1104 audit(1769214714.207:822): pid=5097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:54.326981 kubelet[2775]: E0124 00:31:54.326901 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:31:55.324673 kubelet[2775]: E0124 00:31:55.324433 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:31:56.326443 kubelet[2775]: E0124 00:31:56.326278 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:31:57.323684 kubelet[2775]: E0124 00:31:57.323603 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:31:59.238960 systemd[1]: Started sshd@13-172.234.200.140:22-68.220.241.50:38732.service - OpenSSH per-connection server daemon (68.220.241.50:38732). 
Jan 24 00:31:59.241251 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:31:59.241605 kernel: audit: type=1130 audit(1769214719.237:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.234.200.140:22-68.220.241.50:38732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:59.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.234.200.140:22-68.220.241.50:38732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:31:59.404000 audit[5139]: USER_ACCT pid=5139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.407773 sshd[5139]: Accepted publickey for core from 68.220.241.50 port 38732 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:31:59.410953 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:31:59.414690 kernel: audit: type=1101 audit(1769214719.404:825): pid=5139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.407000 audit[5139]: CRED_ACQ pid=5139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.423249 kernel: audit: type=1103 audit(1769214719.407:826): pid=5139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.423298 kernel: audit: type=1006 audit(1769214719.407:827): pid=5139 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 24 00:31:59.407000 audit[5139]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed19ecd10 a2=3 a3=0 items=0 ppid=1 pid=5139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:59.426675 systemd-logind[1571]: New session 15 of user core. Jan 24 00:31:59.428549 kernel: audit: type=1300 audit(1769214719.407:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed19ecd10 a2=3 a3=0 items=0 ppid=1 pid=5139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:31:59.407000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:59.437754 kernel: audit: type=1327 audit(1769214719.407:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:31:59.438863 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 00:31:59.443000 audit[5139]: USER_START pid=5139 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.453000 audit[5143]: CRED_ACQ pid=5143 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.456994 kernel: audit: type=1105 audit(1769214719.443:828): pid=5139 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.457048 kernel: audit: type=1103 audit(1769214719.453:829): pid=5143 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.569193 sshd[5143]: Connection closed by 68.220.241.50 port 38732 Jan 24 00:31:59.569842 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Jan 24 00:31:59.569000 audit[5139]: USER_END pid=5139 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.578200 systemd[1]: sshd@13-172.234.200.140:22-68.220.241.50:38732.service: Deactivated successfully. Jan 24 00:31:59.580617 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:31:59.581671 kernel: audit: type=1106 audit(1769214719.569:830): pid=5139 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.570000 audit[5139]: CRED_DISP pid=5139 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:31:59.584156 systemd-logind[1571]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:31:59.585479 systemd-logind[1571]: Removed session 15. Jan 24 00:31:59.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.234.200.140:22-68.220.241.50:38732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:31:59.589725 kernel: audit: type=1104 audit(1769214719.570:831): pid=5139 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:03.324500 kubelet[2775]: E0124 00:32:03.324422 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:32:04.327671 kubelet[2775]: E0124 00:32:04.326864 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:32:04.327671 kubelet[2775]: E0124 00:32:04.327164 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:32:04.327671 kubelet[2775]: E0124 00:32:04.327270 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:32:04.607802 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:32:04.607892 kernel: audit: type=1130 audit(1769214724.601:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.234.200.140:22-68.220.241.50:53184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:04.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.234.200.140:22-68.220.241.50:53184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:04.602799 systemd[1]: Started sshd@14-172.234.200.140:22-68.220.241.50:53184.service - OpenSSH per-connection server daemon (68.220.241.50:53184). 
Jan 24 00:32:04.751000 audit[5155]: USER_ACCT pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.755939 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:04.757983 sshd[5155]: Accepted publickey for core from 68.220.241.50 port 53184 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:32:04.761701 kernel: audit: type=1101 audit(1769214724.751:834): pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.752000 audit[5155]: CRED_ACQ pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.770300 kernel: audit: type=1103 audit(1769214724.752:835): pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.770365 kernel: audit: type=1006 audit(1769214724.752:836): pid=5155 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 24 00:32:04.752000 audit[5155]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff26a8e680 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:04.773839 systemd-logind[1571]: New session 16 of user core. Jan 24 00:32:04.776164 kernel: audit: type=1300 audit(1769214724.752:836): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff26a8e680 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:04.752000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:04.781847 kernel: audit: type=1327 audit(1769214724.752:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:04.782918 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 24 00:32:04.786000 audit[5155]: USER_START pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.798681 kernel: audit: type=1105 audit(1769214724.786:837): pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.801000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.810682 kernel: audit: type=1103 audit(1769214724.801:838): pid=5159 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.913100 sshd[5159]: Connection closed by 68.220.241.50 port 53184 Jan 24 00:32:04.913708 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:04.915000 audit[5155]: USER_END pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.926680 kernel: audit: type=1106 audit(1769214724.915:839): pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.924000 audit[5155]: CRED_DISP pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.929793 systemd[1]: sshd@14-172.234.200.140:22-68.220.241.50:53184.service: Deactivated successfully. Jan 24 00:32:04.935621 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:32:04.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.234.200.140:22-68.220.241.50:53184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:04.936680 kernel: audit: type=1104 audit(1769214724.924:840): pid=5155 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:04.940054 systemd-logind[1571]: Session 16 logged out. Waiting for processes to exit. 
Jan 24 00:32:04.954985 systemd[1]: Started sshd@15-172.234.200.140:22-68.220.241.50:53194.service - OpenSSH per-connection server daemon (68.220.241.50:53194). Jan 24 00:32:04.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.234.200.140:22-68.220.241.50:53194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:04.956850 systemd-logind[1571]: Removed session 16. Jan 24 00:32:05.113000 audit[5170]: USER_ACCT pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:05.115480 sshd[5170]: Accepted publickey for core from 68.220.241.50 port 53194 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:32:05.114000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:05.114000 audit[5170]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc66ce2d90 a2=3 a3=0 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:05.114000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:05.117252 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:05.124323 systemd-logind[1571]: New session 17 of user core. Jan 24 00:32:05.131866 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 24 00:32:05.134000 audit[5170]: USER_START pid=5170 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:05.136000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:05.397783 sshd[5174]: Connection closed by 68.220.241.50 port 53194 Jan 24 00:32:05.400851 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:05.400000 audit[5170]: USER_END pid=5170 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:05.401000 audit[5170]: CRED_DISP pid=5170 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:05.406871 systemd[1]: sshd@15-172.234.200.140:22-68.220.241.50:53194.service: Deactivated successfully. Jan 24 00:32:05.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.234.200.140:22-68.220.241.50:53194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:05.412221 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:32:05.414343 systemd-logind[1571]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:32:05.418999 systemd-logind[1571]: Removed session 17. Jan 24 00:32:05.433200 systemd[1]: Started sshd@16-172.234.200.140:22-68.220.241.50:53198.service - OpenSSH per-connection server daemon (68.220.241.50:53198). Jan 24 00:32:05.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.234.200.140:22-68.220.241.50:53198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:32:05.589000 audit[5184]: USER_ACCT pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:05.591276 sshd[5184]: Accepted publickey for core from 68.220.241.50 port 53198 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:32:05.590000 audit[5184]: CRED_ACQ pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:05.590000 audit[5184]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6e8073f0 a2=3 a3=0 items=0 ppid=1 pid=5184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:05.590000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:05.593876 sshd-session[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:05.606385 systemd-logind[1571]: New session 18 of user core. Jan 24 00:32:05.614839 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:32:05.619000 audit[5184]: USER_START pid=5184 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:05.622000 audit[5188]: CRED_ACQ pid=5188 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:06.240000 audit[5199]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5199 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:32:06.240000 audit[5199]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc632b4360 a2=0 a3=7ffc632b434c items=0 ppid=2879 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:06.240000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:32:06.245000 audit[5199]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5199 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:32:06.264678 sshd[5188]: Connection closed by 68.220.241.50 port 53198 Jan 24 00:32:06.265458 sshd-session[5184]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:06.245000 audit[5199]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc632b4360 a2=0 a3=7ffc632b434c items=0 ppid=2879 pid=5199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 
00:32:06.245000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:32:06.268000 audit[5184]: USER_END pid=5184 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:06.269000 audit[5184]: CRED_DISP pid=5184 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:06.273261 systemd-logind[1571]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:32:06.278053 systemd[1]: sshd@16-172.234.200.140:22-68.220.241.50:53198.service: Deactivated successfully. Jan 24 00:32:06.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.234.200.140:22-68.220.241.50:53198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:06.281071 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:32:06.286412 systemd-logind[1571]: Removed session 18. Jan 24 00:32:06.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.234.200.140:22-68.220.241.50:53210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:06.304715 systemd[1]: Started sshd@17-172.234.200.140:22-68.220.241.50:53210.service - OpenSSH per-connection server daemon (68.220.241.50:53210). 
Jan 24 00:32:06.318000 audit[5205]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:32:06.318000 audit[5205]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe36cfcf60 a2=0 a3=7ffe36cfcf4c items=0 ppid=2879 pid=5205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:06.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:32:06.322000 audit[5205]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:32:06.322000 audit[5205]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe36cfcf60 a2=0 a3=0 items=0 ppid=2879 pid=5205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:06.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:32:06.523000 audit[5206]: USER_ACCT pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:06.526759 sshd[5206]: Accepted publickey for core from 68.220.241.50 port 53210 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:32:06.526000 audit[5206]: CRED_ACQ pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:06.526000 audit[5206]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd55c39f90 a2=3 a3=0 items=0 ppid=1 pid=5206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:06.526000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:06.529375 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:06.540174 systemd-logind[1571]: New session 19 of user core. Jan 24 00:32:06.543811 systemd[1]: Started session-19.scope - Session 19 of User core. 
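The NETFILTER_CFG events interleaved with these sessions are periodic iptables-restore runs spawned by a long-running parent (pid 2879, not identified in this excerpt). Their PROCTITLE records decode the same way and show the exact restore invocation:

    # The PROCTITLE value attached to the iptables-restore SYSCALL records above:
    raw = bytes.fromhex(
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    )
    print(" ".join(part.decode() for part in raw.split(b"\x00") if part))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters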
Jan 24 00:32:06.548000 audit[5206]: USER_START pid=5206 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:06.552000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:06.844484 sshd[5212]: Connection closed by 68.220.241.50 port 53210 Jan 24 00:32:06.846083 sshd-session[5206]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:06.848000 audit[5206]: USER_END pid=5206 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:06.848000 audit[5206]: CRED_DISP pid=5206 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:06.853856 systemd-logind[1571]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:32:06.855191 systemd[1]: sshd@17-172.234.200.140:22-68.220.241.50:53210.service: Deactivated successfully. Jan 24 00:32:06.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.234.200.140:22-68.220.241.50:53210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:06.859112 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:32:06.863863 systemd-logind[1571]: Removed session 19. Jan 24 00:32:06.875933 systemd[1]: Started sshd@18-172.234.200.140:22-68.220.241.50:53216.service - OpenSSH per-connection server daemon (68.220.241.50:53216). Jan 24 00:32:06.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.234.200.140:22-68.220.241.50:53216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:32:07.029000 audit[5222]: USER_ACCT pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:07.034380 sshd[5222]: Accepted publickey for core from 68.220.241.50 port 53216 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:32:07.035000 audit[5222]: CRED_ACQ pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:07.035000 audit[5222]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc13943b00 a2=3 a3=0 items=0 ppid=1 pid=5222 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:07.035000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:07.039530 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:07.053715 systemd-logind[1571]: New session 20 of user core. Jan 24 00:32:07.057804 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:32:07.060000 audit[5222]: USER_START pid=5222 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:07.065000 audit[5226]: CRED_ACQ pid=5226 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:07.193303 sshd[5226]: Connection closed by 68.220.241.50 port 53216 Jan 24 00:32:07.194506 sshd-session[5222]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:07.194000 audit[5222]: USER_END pid=5222 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:07.195000 audit[5222]: CRED_DISP pid=5222 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:07.200755 systemd[1]: sshd@18-172.234.200.140:22-68.220.241.50:53216.service: Deactivated successfully. Jan 24 00:32:07.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.234.200.140:22-68.220.241.50:53216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:07.203840 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:32:07.205480 systemd-logind[1571]: Session 20 logged out. Waiting for processes to exit. 
Jan 24 00:32:07.208616 systemd-logind[1571]: Removed session 20. Jan 24 00:32:08.124744 systemd[1]: Started sshd@19-172.234.200.140:22-198.235.24.212:50226.service - OpenSSH per-connection server daemon (198.235.24.212:50226). Jan 24 00:32:08.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.234.200.140:22-198.235.24.212:50226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:08.263916 sshd[5237]: Connection closed by 198.235.24.212 port 50226 Jan 24 00:32:08.265268 systemd[1]: sshd@19-172.234.200.140:22-198.235.24.212:50226.service: Deactivated successfully. Jan 24 00:32:08.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.234.200.140:22-198.235.24.212:50226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:08.332952 kubelet[2775]: E0124 00:32:08.331988 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:32:08.333812 kubelet[2775]: E0124 00:32:08.333786 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:32:11.325474 kubelet[2775]: E0124 00:32:11.325394 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-df9447cd6-v4qqr" podUID="3a284cf5-2a50-4ae3-ac95-941e18d4d850" Jan 24 00:32:11.579000 audit[5242]: NETFILTER_CFG table=filter:148 
family=2 entries=26 op=nft_register_rule pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:32:11.581890 kernel: kauditd_printk_skb: 59 callbacks suppressed Jan 24 00:32:11.581986 kernel: audit: type=1325 audit(1769214731.579:884): table=filter:148 family=2 entries=26 op=nft_register_rule pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:32:11.579000 audit[5242]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9963d760 a2=0 a3=7ffd9963d74c items=0 ppid=2879 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:11.598703 kernel: audit: type=1300 audit(1769214731.579:884): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9963d760 a2=0 a3=7ffd9963d74c items=0 ppid=2879 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:11.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:32:11.610797 kernel: audit: type=1327 audit(1769214731.579:884): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:32:11.610868 kernel: audit: type=1325 audit(1769214731.599:885): table=nat:149 family=2 entries=104 op=nft_register_chain pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:32:11.599000 audit[5242]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 24 00:32:11.619622 kernel: audit: type=1300 audit(1769214731.599:885): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd9963d760 a2=0 a3=7ffd9963d74c items=0 ppid=2879 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:11.599000 audit[5242]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd9963d760 a2=0 a3=7ffd9963d74c items=0 ppid=2879 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:11.599000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:32:11.624685 kernel: audit: type=1327 audit(1769214731.599:885): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 24 00:32:12.228492 systemd[1]: Started sshd@20-172.234.200.140:22-68.220.241.50:53224.service - OpenSSH per-connection server daemon (68.220.241.50:53224). Jan 24 00:32:12.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.234.200.140:22-68.220.241.50:53224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:32:12.235809 kernel: audit: type=1130 audit(1769214732.227:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.234.200.140:22-68.220.241.50:53224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:12.396000 audit[5244]: USER_ACCT pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:12.409953 sshd[5244]: Accepted publickey for core from 68.220.241.50 port 53224 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:32:12.413166 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:12.427445 systemd-logind[1571]: New session 21 of user core. Jan 24 00:32:12.430751 kernel: audit: type=1101 audit(1769214732.396:887): pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:12.409000 audit[5244]: CRED_ACQ pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:12.439237 kernel: audit: type=1103 audit(1769214732.409:888): pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:12.439288 kernel: audit: type=1006 audit(1769214732.410:889): pid=5244 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 24 00:32:12.439812 systemd[1]: Started session-21.scope - Session 21 of User core. 
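kauditd mirrors each audit record into the kernel log with only its numeric type, which is why the same event appears twice (for example USER_ACCT next to "audit: type=1101"). Most of the pairings are visible directly in this log; the rest follow the constants in linux/audit.h. A short reference for the numbers seen here:

    # Audit record types appearing in this log (numeric type -> record name).
    AUDIT_TYPES = {
        1006: "LOGIN",          # auid/ses assignment for the new session
        1101: "USER_ACCT",      # PAM accounting
        1103: "CRED_ACQ",       # PAM setcred (acquire)
        1104: "CRED_DISP",      # PAM setcred (dispose)
        1105: "USER_START",     # PAM session_open
        1106: "USER_END",       # PAM session_close
        1130: "SERVICE_START",  # systemd unit started
        1131: "SERVICE_STOP",   # systemd unit stopped
        1300: "SYSCALL",
        1325: "NETFILTER_CFG",
        1327: "PROCTITLE",
    }
    print(AUDIT_TYPES[1130])  # -> SERVICE_START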
Jan 24 00:32:12.410000 audit[5244]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc77c76c50 a2=3 a3=0 items=0 ppid=1 pid=5244 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:12.410000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:12.443000 audit[5244]: USER_START pid=5244 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:12.446000 audit[5248]: CRED_ACQ pid=5248 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:12.554895 sshd[5248]: Connection closed by 68.220.241.50 port 53224 Jan 24 00:32:12.556335 sshd-session[5244]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:12.557000 audit[5244]: USER_END pid=5244 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:12.557000 audit[5244]: CRED_DISP pid=5244 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:12.562216 systemd-logind[1571]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:32:12.562413 systemd[1]: sshd@20-172.234.200.140:22-68.220.241.50:53224.service: Deactivated successfully. Jan 24 00:32:12.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.234.200.140:22-68.220.241.50:53224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:12.564622 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:32:12.566477 systemd-logind[1571]: Removed session 21. 
Jan 24 00:32:15.325598 kubelet[2775]: E0124 00:32:15.325547 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-8g26m" podUID="2078a205-dd8a-4be4-86c7-d47960d16fb3" Jan 24 00:32:16.327758 kubelet[2775]: E0124 00:32:16.326777 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b2qbk" podUID="fe31be8c-c3f6-4683-9bda-2dbdf7783a1a" Jan 24 00:32:17.597123 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 24 00:32:17.597215 kernel: audit: type=1130 audit(1769214737.586:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.234.200.140:22-68.220.241.50:44558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:17.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.234.200.140:22-68.220.241.50:44558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:17.587988 systemd[1]: Started sshd@21-172.234.200.140:22-68.220.241.50:44558.service - OpenSSH per-connection server daemon (68.220.241.50:44558). 
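The recurring pod_workers.go:1301 errors throughout this window are all the same failure: the calico images tagged v3.30.4 are not found under ghcr.io/flatcar/calico, so every sync attempt ends in ImagePullBackOff. A quick way to see the full set of affected images is to scan a captured journal for the back-off messages; a minimal sketch, where the file name node.log is a placeholder and not something from this log:

    # Tally the images that kubelet reports as stuck in ImagePullBackOff.
    import re
    from collections import Counter

    backoff = re.compile(r'Back-off pulling image [\\"]*([\w./:@-]+)')
    counts = Counter()
    with open("node.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            counts.update(backoff.findall(line))

    for image, n in counts.most_common():
        print(f"{n:4d}  {image}")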
Jan 24 00:32:17.750000 audit[5263]: USER_ACCT pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.756438 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:17.760292 sshd[5263]: Accepted publickey for core from 68.220.241.50 port 44558 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:32:17.767819 kernel: audit: type=1101 audit(1769214737.750:896): pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.767875 kernel: audit: type=1103 audit(1769214737.753:897): pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.753000 audit[5263]: CRED_ACQ pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.772725 kernel: audit: type=1006 audit(1769214737.753:898): pid=5263 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 24 00:32:17.778128 systemd-logind[1571]: New session 22 of user core. Jan 24 00:32:17.753000 audit[5263]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda4e89230 a2=3 a3=0 items=0 ppid=1 pid=5263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:17.795704 kernel: audit: type=1300 audit(1769214737.753:898): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda4e89230 a2=3 a3=0 items=0 ppid=1 pid=5263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:17.753000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:17.800426 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 24 00:32:17.800718 kernel: audit: type=1327 audit(1769214737.753:898): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:17.807000 audit[5263]: USER_START pid=5263 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.821682 kernel: audit: type=1105 audit(1769214737.807:899): pid=5263 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.821803 kernel: audit: type=1103 audit(1769214737.808:900): pid=5267 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.808000 audit[5267]: CRED_ACQ pid=5267 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.932903 sshd[5267]: Connection closed by 68.220.241.50 port 44558 Jan 24 00:32:17.934002 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:17.934000 audit[5263]: USER_END pid=5263 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.946702 kernel: audit: type=1106 audit(1769214737.934:901): pid=5263 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.949456 systemd[1]: sshd@21-172.234.200.140:22-68.220.241.50:44558.service: Deactivated successfully. Jan 24 00:32:17.952511 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:32:17.945000 audit[5263]: CRED_DISP pid=5263 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.954063 systemd-logind[1571]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:32:17.958024 systemd-logind[1571]: Removed session 22. 
Jan 24 00:32:17.962759 kernel: audit: type=1104 audit(1769214737.945:902): pid=5263 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:17.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.234.200.140:22-68.220.241.50:44558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:19.324688 kubelet[2775]: E0124 00:32:19.324494 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569b7b4d59-5k5g4" podUID="cc3d3be5-6ea1-456f-b2a1-4ebadd452d7b" Jan 24 00:32:21.325819 kubelet[2775]: E0124 00:32:21.325776 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzs24" podUID="9b58f3e4-7a56-472e-b647-710386b89b86" Jan 24 00:32:22.325507 kubelet[2775]: E0124 00:32:22.325427 2775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-547797c6d5-k28bz" podUID="1fd846d3-1f7b-431f-b783-0e9765cbef2a" Jan 24 00:32:22.968945 systemd[1]: Started sshd@22-172.234.200.140:22-68.220.241.50:56016.service - OpenSSH per-connection server daemon (68.220.241.50:56016). Jan 24 00:32:22.977677 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 24 00:32:22.977752 kernel: audit: type=1130 audit(1769214742.967:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.234.200.140:22-68.220.241.50:56016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:22.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.234.200.140:22-68.220.241.50:56016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 24 00:32:23.138000 audit[5279]: USER_ACCT pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.147263 sshd[5279]: Accepted publickey for core from 68.220.241.50 port 56016 ssh2: RSA SHA256:IRjesLrG0hOviko3cFhvj9lmkvOjb9NLdxgdQ3AAleE Jan 24 00:32:23.148110 kernel: audit: type=1101 audit(1769214743.138:905): pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.150768 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:32:23.148000 audit[5279]: CRED_ACQ pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.160274 systemd-logind[1571]: New session 23 of user core. Jan 24 00:32:23.167625 kernel: audit: type=1103 audit(1769214743.148:906): pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.167709 kernel: audit: type=1006 audit(1769214743.148:907): pid=5279 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 24 00:32:23.148000 audit[5279]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc958ebe70 a2=3 a3=0 items=0 ppid=1 pid=5279 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:23.172861 kernel: audit: type=1300 audit(1769214743.148:907): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc958ebe70 a2=3 a3=0 items=0 ppid=1 pid=5279 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 24 00:32:23.174126 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 24 00:32:23.148000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:23.179858 kernel: audit: type=1327 audit(1769214743.148:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 24 00:32:23.179000 audit[5279]: USER_START pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.184000 audit[5283]: CRED_ACQ pid=5283 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.193929 kernel: audit: type=1105 audit(1769214743.179:908): pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.193998 kernel: audit: type=1103 audit(1769214743.184:909): pid=5283 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.295642 sshd[5283]: Connection closed by 68.220.241.50 port 56016 Jan 24 00:32:23.296513 sshd-session[5279]: pam_unix(sshd:session): session closed for user core Jan 24 00:32:23.299000 audit[5279]: USER_END pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.304228 systemd-logind[1571]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:32:23.304960 systemd[1]: sshd@22-172.234.200.140:22-68.220.241.50:56016.service: Deactivated successfully. Jan 24 00:32:23.308605 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:32:23.309697 kernel: audit: type=1106 audit(1769214743.299:910): pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.299000 audit[5279]: CRED_DISP pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.312949 systemd-logind[1571]: Removed session 23. 
Jan 24 00:32:23.316911 kernel: audit: type=1104 audit(1769214743.299:911): pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 24 00:32:23.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.234.200.140:22-68.220.241.50:56016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 24 00:32:23.323403 kubelet[2775]: E0124 00:32:23.323369 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 24 00:32:25.325537 containerd[1583]: time="2026-01-24T00:32:25.325251617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:32:25.483964 containerd[1583]: time="2026-01-24T00:32:25.483781964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 24 00:32:25.484935 containerd[1583]: time="2026-01-24T00:32:25.484898124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:32:25.485117 containerd[1583]: time="2026-01-24T00:32:25.484996594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 24 00:32:25.485399 kubelet[2775]: E0124 00:32:25.485368 2775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:32:25.486888 kubelet[2775]: E0124 00:32:25.486701 2775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:32:25.486888 kubelet[2775]: E0124 00:32:25.486820 2775 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:afba9b21ebd04ac3a3f6ee43ebc36fd5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xq9c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-df9447cd6-v4qqr_calico-system(3a284cf5-2a50-4ae3-ac95-941e18d4d850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:32:25.488748 containerd[1583]: time="2026-01-24T00:32:25.488701730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
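containerd reports the underlying cause directly in the entries above: ghcr.io answers 404 Not Found for the manifest, so the pull fails with NotFound rather than an auth or network error. One way to confirm from any machine is to ask the registry for the tag over the OCI distribution API; this is only a sketch, under the assumption that ghcr.io's anonymous token endpoint (https://ghcr.io/token) and /v2/ manifest routes behave as they usually do for public images:

    # Check whether a tag exists on ghcr.io (anonymous pull token + HEAD on the manifest).
    import json
    import urllib.error
    import urllib.request

    def tag_exists(repository: str, tag: str) -> bool:
        token_url = f"https://ghcr.io/token?scope=repository:{repository}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repository}/manifests/{tag}",
            method="HEAD",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
        )
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    print(tag_exists("flatcar/calico/whisker", "v3.30.4"))  # expected: False, matching the log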