Jan 14 01:42:22.183435 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:26:24 -00 2026
Jan 14 01:42:22.183459 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260
Jan 14 01:42:22.183468 kernel: BIOS-provided physical RAM map:
Jan 14 01:42:22.183475 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jan 14 01:42:22.183481 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 14 01:42:22.183488 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 14 01:42:22.183497 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jan 14 01:42:22.183504 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jan 14 01:42:22.183510 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 14 01:42:22.183517 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 14 01:42:22.183523 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 01:42:22.183530 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 14 01:42:22.183536 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 14 01:42:22.183543 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 01:42:22.183553 kernel: NX (Execute Disable) protection: active
Jan 14 01:42:22.183560 kernel: APIC: Static calls initialized
Jan 14 01:42:22.183567 kernel: SMBIOS 2.8 present.
Jan 14 01:42:22.183574 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jan 14 01:42:22.183581 kernel: DMI: Memory slots populated: 1/1
Jan 14 01:42:22.183590 kernel: Hypervisor detected: KVM
Jan 14 01:42:22.183597 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 14 01:42:22.183604 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 01:42:22.183610 kernel: kvm-clock: using sched offset of 6201589314 cycles
Jan 14 01:42:22.183618 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 01:42:22.183625 kernel: tsc: Detected 2000.000 MHz processor
Jan 14 01:42:22.183633 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 01:42:22.183640 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 01:42:22.183648 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jan 14 01:42:22.183657 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 14 01:42:22.183665 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 01:42:22.183672 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 14 01:42:22.183679 kernel: Using GB pages for direct mapping
Jan 14 01:42:22.183686 kernel: ACPI: Early table checksum verification disabled
Jan 14 01:42:22.183693 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jan 14 01:42:22.183701 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:42:22.183710 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:42:22.183717 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:42:22.183725 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 14 01:42:22.183732 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:42:22.183739 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:42:22.183750 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:42:22.183760 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:42:22.183768 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jan 14 01:42:22.183775 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jan 14 01:42:22.183783 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 14 01:42:22.183790 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jan 14 01:42:22.183800 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jan 14 01:42:22.183807 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jan 14 01:42:22.183815 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jan 14 01:42:22.183823 kernel: No NUMA configuration found
Jan 14 01:42:22.183830 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 14 01:42:22.183838 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jan 14 01:42:22.183845 kernel: Zone ranges:
Jan 14 01:42:22.183853 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 01:42:22.183862 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 01:42:22.183870 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 14 01:42:22.183877 kernel: Device empty
Jan 14 01:42:22.183885 kernel: Movable zone start for each node
Jan 14 01:42:22.183892 kernel: Early memory node ranges
Jan 14 01:42:22.183900 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 14 01:42:22.183907 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jan 14 01:42:22.183917 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 14 01:42:22.183924 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 14 01:42:22.183932 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 01:42:22.183940 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 14 01:42:22.183947 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 14 01:42:22.183955 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 01:42:22.183962 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 01:42:22.183970 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 01:42:22.183980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 01:42:22.183987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 01:42:22.183995 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 01:42:22.184002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 01:42:22.184010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 01:42:22.184018 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 01:42:22.184025 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 01:42:22.184036 kernel: TSC deadline timer available
Jan 14 01:42:22.184043 kernel: CPU topo: Max. logical packages: 1
Jan 14 01:42:22.184051 kernel: CPU topo: Max. logical dies: 1
Jan 14 01:42:22.184058 kernel: CPU topo: Max. dies per package: 1
Jan 14 01:42:22.184065 kernel: CPU topo: Max. threads per core: 1
Jan 14 01:42:22.184073 kernel: CPU topo: Num. cores per package: 2
Jan 14 01:42:22.184081 kernel: CPU topo: Num. threads per package: 2
Jan 14 01:42:22.184090 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 14 01:42:22.184098 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 01:42:22.184105 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 01:42:22.184113 kernel: kvm-guest: setup PV sched yield
Jan 14 01:42:22.184121 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 14 01:42:22.184128 kernel: Booting paravirtualized kernel on KVM
Jan 14 01:42:22.184136 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 01:42:22.184143 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 01:42:22.184153 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 14 01:42:22.184161 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 14 01:42:22.184168 kernel: pcpu-alloc: [0] 0 1
Jan 14 01:42:22.184176 kernel: kvm-guest: PV spinlocks enabled
Jan 14 01:42:22.184183 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 01:42:22.184192 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260
Jan 14 01:42:22.184202 kernel: random: crng init done
Jan 14 01:42:22.184209 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 01:42:22.184217 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 01:42:22.184224 kernel: Fallback order for Node 0: 0
Jan 14 01:42:22.184232 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jan 14 01:42:22.184239 kernel: Policy zone: Normal
Jan 14 01:42:22.184262 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 01:42:22.184272 kernel: software IO TLB: area num 2.
Jan 14 01:42:22.184280 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 01:42:22.184288 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 14 01:42:22.184295 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 01:42:22.184303 kernel: Dynamic Preempt: voluntary
Jan 14 01:42:22.184310 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 01:42:22.184319 kernel: rcu: RCU event tracing is enabled.
Jan 14 01:42:22.184326 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 01:42:22.184336 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 01:42:22.184344 kernel: Rude variant of Tasks RCU enabled.
Jan 14 01:42:22.184351 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 01:42:22.184359 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 01:42:22.184366 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 01:42:22.184374 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 01:42:22.184391 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 01:42:22.184399 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 01:42:22.184407 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 14 01:42:22.184417 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 01:42:22.184425 kernel: Console: colour VGA+ 80x25
Jan 14 01:42:22.184433 kernel: printk: legacy console [tty0] enabled
Jan 14 01:42:22.184441 kernel: printk: legacy console [ttyS0] enabled
Jan 14 01:42:22.184449 kernel: ACPI: Core revision 20240827
Jan 14 01:42:22.184459 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 01:42:22.184467 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 01:42:22.184475 kernel: x2apic enabled
Jan 14 01:42:22.184483 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 01:42:22.184523 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 01:42:22.184532 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 01:42:22.184540 kernel: kvm-guest: setup PV IPIs
Jan 14 01:42:22.184551 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 01:42:22.184559 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 14 01:42:22.184567 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jan 14 01:42:22.184575 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 01:42:22.184583 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 01:42:22.184591 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 01:42:22.184599 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 01:42:22.184609 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 01:42:22.184617 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 01:42:22.184625 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 14 01:42:22.184633 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 14 01:42:22.184641 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 14 01:42:22.184649 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 01:42:22.184660 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 01:42:22.184668 kernel: active return thunk: srso_alias_return_thunk
Jan 14 01:42:22.184676 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 01:42:22.184684 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 01:42:22.184692 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 01:42:22.184700 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 01:42:22.184708 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 01:42:22.184718 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 01:42:22.184726 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 14 01:42:22.184734 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 01:42:22.184742 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jan 14 01:42:22.184750 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jan 14 01:42:22.184758 kernel: Freeing SMP alternatives memory: 32K
Jan 14 01:42:22.184766 kernel: pid_max: default: 32768 minimum: 301
Jan 14 01:42:22.184776 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 01:42:22.184784 kernel: landlock: Up and running.
Jan 14 01:42:22.184792 kernel: SELinux: Initializing.
Jan 14 01:42:22.184800 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 01:42:22.184808 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 01:42:22.184816 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 01:42:22.184824 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 14 01:42:22.184834 kernel: ... version: 0
Jan 14 01:42:22.184842 kernel: ... bit width: 48
Jan 14 01:42:22.184850 kernel: ... generic registers: 6
Jan 14 01:42:22.184858 kernel: ... value mask: 0000ffffffffffff
Jan 14 01:42:22.184866 kernel: ... max period: 00007fffffffffff
Jan 14 01:42:22.184873 kernel: ... fixed-purpose events: 0
Jan 14 01:42:22.184881 kernel: ... event mask: 000000000000003f
Jan 14 01:42:22.184889 kernel: signal: max sigframe size: 3376
Jan 14 01:42:22.184899 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 01:42:22.184907 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 01:42:22.184915 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 01:42:22.184923 kernel: smp: Bringing up secondary CPUs ...
Jan 14 01:42:22.184931 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 01:42:22.184939 kernel: .... node #0, CPUs: #1
Jan 14 01:42:22.184946 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 01:42:22.184956 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jan 14 01:42:22.184965 kernel: Memory: 3978192K/4193772K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 210904K reserved, 0K cma-reserved)
Jan 14 01:42:22.184973 kernel: devtmpfs: initialized
Jan 14 01:42:22.184980 kernel: x86/mm: Memory block size: 128MB
Jan 14 01:42:22.184989 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 01:42:22.184996 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 01:42:22.185004 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 01:42:22.185014 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 01:42:22.185022 kernel: audit: initializing netlink subsys (disabled)
Jan 14 01:42:22.185030 kernel: audit: type=2000 audit(1768354939.192:1): state=initialized audit_enabled=0 res=1
Jan 14 01:42:22.185038 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 01:42:22.185046 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 01:42:22.185054 kernel: cpuidle: using governor menu
Jan 14 01:42:22.185062 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 01:42:22.185072 kernel: dca service started, version 1.12.1
Jan 14 01:42:22.185080 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 14 01:42:22.185088 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 14 01:42:22.185095 kernel: PCI: Using configuration type 1 for base access
Jan 14 01:42:22.185103 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 01:42:22.185111 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 01:42:22.185118 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 01:42:22.185128 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 01:42:22.185135 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 01:42:22.185143 kernel: ACPI: Added _OSI(Module Device)
Jan 14 01:42:22.185150 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 01:42:22.185158 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 01:42:22.185166 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 01:42:22.185173 kernel: ACPI: Interpreter enabled
Jan 14 01:42:22.185183 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 01:42:22.185191 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 01:42:22.185198 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 01:42:22.185206 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 01:42:22.185213 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 01:42:22.185221 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 01:42:22.185491 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 01:42:22.185686 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 01:42:22.185868 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 01:42:22.185879 kernel: PCI host bridge to bus 0000:00
Jan 14 01:42:22.186058 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 01:42:22.186269 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 01:42:22.186449 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 01:42:22.186612 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 14 01:42:22.186772 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 14 01:42:22.186974 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jan 14 01:42:22.187137 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 01:42:22.187360 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 01:42:22.187554 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 01:42:22.187730 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 14 01:42:22.187904 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 14 01:42:22.188083 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 14 01:42:22.188312 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 01:42:22.188507 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jan 14 01:42:22.188682 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jan 14 01:42:22.188856 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 14 01:42:22.189031 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 14 01:42:22.189214 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 01:42:22.189621 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jan 14 01:42:22.189834 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 14 01:42:22.190014 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 14 01:42:22.190188 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 14 01:42:22.190444 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 01:42:22.190624 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 01:42:22.190997 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 01:42:22.191179 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jan 14 01:42:22.191375 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jan 14 01:42:22.191560 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 01:42:22.191734 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 14 01:42:22.191746 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 01:42:22.191758 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 01:42:22.191767 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 01:42:22.191775 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 01:42:22.191784 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 01:42:22.191792 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 01:42:22.191800 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 01:42:22.191808 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 01:42:22.191819 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 01:42:22.191827 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 01:42:22.191835 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 01:42:22.191843 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 01:42:22.191852 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 01:42:22.191860 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 01:42:22.191868 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 01:42:22.191878 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 01:42:22.191887 kernel: iommu: Default domain type: Translated
Jan 14 01:42:22.191895 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 01:42:22.191903 kernel: PCI: Using ACPI for IRQ routing
Jan 14 01:42:22.191911 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 01:42:22.191919 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jan 14 01:42:22.191928 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jan 14 01:42:22.192103 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 01:42:22.192315 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 01:42:22.192495 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 01:42:22.192506 kernel: vgaarb: loaded
Jan 14 01:42:22.192514 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 01:42:22.192522 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 01:42:22.192530 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 01:42:22.192542 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 01:42:22.192550 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 01:42:22.192559 kernel: pnp: PnP ACPI init
Jan 14 01:42:22.192756 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 14 01:42:22.192769 kernel: pnp: PnP ACPI: found 5 devices
Jan 14 01:42:22.192777 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 01:42:22.192785 kernel: NET: Registered PF_INET protocol family
Jan 14 01:42:22.192797 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 01:42:22.192805 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 01:42:22.192813 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 01:42:22.192821 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 01:42:22.192829 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 01:42:22.192837 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 01:42:22.192845 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 01:42:22.192855 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 01:42:22.192863 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 01:42:22.192871 kernel: NET: Registered PF_XDP protocol family
Jan 14 01:42:22.193034 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 01:42:22.193197 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 01:42:22.193448 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 01:42:22.193617 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 14 01:42:22.193778 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 14 01:42:22.193937 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jan 14 01:42:22.193948 kernel: PCI: CLS 0 bytes, default 64
Jan 14 01:42:22.193956 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 01:42:22.193964 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jan 14 01:42:22.193972 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 14 01:42:22.193984 kernel: Initialise system trusted keyrings
Jan 14 01:42:22.193992 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 01:42:22.193999 kernel: Key type asymmetric registered
Jan 14 01:42:22.194007 kernel: Asymmetric key parser 'x509' registered
Jan 14 01:42:22.194015 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 01:42:22.194023 kernel: io scheduler mq-deadline registered
Jan 14 01:42:22.194031 kernel: io scheduler kyber registered
Jan 14 01:42:22.194041 kernel: io scheduler bfq registered
Jan 14 01:42:22.194049 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 01:42:22.194058 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 01:42:22.194066 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 01:42:22.194074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 01:42:22.194082 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 01:42:22.194090 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 01:42:22.194100 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 01:42:22.194108 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 01:42:22.194116 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 14 01:42:22.194332 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 14 01:42:22.194506 kernel: rtc_cmos 00:03: registered as rtc0
Jan 14 01:42:22.194673 kernel: rtc_cmos 00:03: setting system clock to 2026-01-14T01:42:20 UTC (1768354940)
Jan 14 01:42:22.194840 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 14 01:42:22.194855 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 01:42:22.194863 kernel: NET: Registered PF_INET6 protocol family
Jan 14 01:42:22.194871 kernel: Segment Routing with IPv6
Jan 14 01:42:22.194879 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 01:42:22.194887 kernel: NET: Registered PF_PACKET protocol family
Jan 14 01:42:22.194895 kernel: Key type dns_resolver registered
Jan 14 01:42:22.194903 kernel: IPI shorthand broadcast: enabled
Jan 14 01:42:22.194914 kernel: sched_clock: Marking stable (1809004380, 361737795)->(2264017207, -93275032)
Jan 14 01:42:22.194922 kernel: registered taskstats version 1
Jan 14 01:42:22.194930 kernel: Loading compiled-in X.509 certificates
Jan 14 01:42:22.194938 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e43fcdb17feb86efe6ca4b76910b93467fb95f4f'
Jan 14 01:42:22.194946 kernel: Demotion targets for Node 0: null
Jan 14 01:42:22.194953 kernel: Key type .fscrypt registered
Jan 14 01:42:22.194961 kernel: Key type fscrypt-provisioning registered
Jan 14 01:42:22.194972 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 01:42:22.194980 kernel: ima: Allocated hash algorithm: sha1
Jan 14 01:42:22.194988 kernel: ima: No architecture policies found
Jan 14 01:42:22.194995 kernel: clk: Disabling unused clocks
Jan 14 01:42:22.195003 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 14 01:42:22.195011 kernel: Write protecting the kernel read-only data: 47104k
Jan 14 01:42:22.195019 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 14 01:42:22.195029 kernel: Run /init as init process
Jan 14 01:42:22.195037 kernel: with arguments:
Jan 14 01:42:22.195045 kernel: /init
Jan 14 01:42:22.195053 kernel: with environment:
Jan 14 01:42:22.195061 kernel: HOME=/
Jan 14 01:42:22.195083 kernel: TERM=linux
Jan 14 01:42:22.195094 kernel: SCSI subsystem initialized
Jan 14 01:42:22.195104 kernel: libata version 3.00 loaded.
Jan 14 01:42:22.195300 kernel: ahci 0000:00:1f.2: version 3.0
Jan 14 01:42:22.195313 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 14 01:42:22.195489 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 14 01:42:22.195664 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 14 01:42:22.195838 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 14 01:42:22.196042 kernel: scsi host0: ahci
Jan 14 01:42:22.196333 kernel: scsi host1: ahci
Jan 14 01:42:22.196792 kernel: scsi host2: ahci
Jan 14 01:42:22.196987 kernel: scsi host3: ahci
Jan 14 01:42:22.197178 kernel: scsi host4: ahci
Jan 14 01:42:22.197393 kernel: scsi host5: ahci
Jan 14 01:42:22.197411 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1
Jan 14 01:42:22.197420 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1
Jan 14 01:42:22.197429 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1
Jan 14 01:42:22.197437 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1
Jan 14 01:42:22.197446 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1
Jan 14 01:42:22.197454 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1
Jan 14 01:42:22.197465 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 14 01:42:22.197473 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 14 01:42:22.197482 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 14 01:42:22.197490 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 14 01:42:22.197498 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 14 01:42:22.197507 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 14 01:42:22.197696 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jan 14 01:42:22.197889 kernel: scsi host6: Virtio SCSI HBA
Jan 14 01:42:22.198097 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 14 01:42:22.198320 kernel: sd 6:0:0:0: Power-on or device reset occurred
Jan 14 01:42:22.198520 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jan 14 01:42:22.198902 kernel: sd 6:0:0:0: [sda] Write Protect is off
Jan 14 01:42:22.199101 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 14 01:42:22.199315 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 14 01:42:22.199328 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 14 01:42:22.199336 kernel: GPT:25804799 != 167739391
Jan 14 01:42:22.199345 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 14 01:42:22.199353 kernel: GPT:25804799 != 167739391
Jan 14 01:42:22.199362 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 14 01:42:22.199374 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 01:42:22.199605 kernel: sd 6:0:0:0: [sda] Attached SCSI disk
Jan 14 01:42:22.199618 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 01:42:22.199627 kernel: device-mapper: uevent: version 1.0.3
Jan 14 01:42:22.199635 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 14 01:42:22.199644 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 14 01:42:22.199652 kernel: raid6: avx2x4 gen() 27659 MB/s
Jan 14 01:42:22.199665 kernel: raid6: avx2x2 gen() 25989 MB/s
Jan 14 01:42:22.199673 kernel: raid6: avx2x1 gen() 18317 MB/s
Jan 14 01:42:22.199683 kernel: raid6: using algorithm avx2x4 gen() 27659 MB/s
Jan 14 01:42:22.199691 kernel: raid6: .... xor() 3492 MB/s, rmw enabled
Jan 14 01:42:22.199702 kernel: raid6: using avx2x2 recovery algorithm
Jan 14 01:42:22.199710 kernel: xor: automatically using best checksumming function avx
Jan 14 01:42:22.199719 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 01:42:22.199727 kernel: BTRFS: device fsid cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (167)
Jan 14 01:42:22.199736 kernel: BTRFS info (device dm-0): first mount of filesystem cd6116b6-e1b6-44f4-b1e2-5e7c5565b295
Jan 14 01:42:22.199745 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:42:22.199755 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 14 01:42:22.199766 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 01:42:22.199774 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 14 01:42:22.199782 kernel: loop: module loaded
Jan 14 01:42:22.199791 kernel: loop0: detected capacity change from 0 to 100544
Jan 14 01:42:22.199820 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 01:42:22.199830 systemd[1]: Successfully made /usr/ read-only.
Jan 14 01:42:22.199841 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 01:42:22.199852 systemd[1]: Detected virtualization kvm.
Jan 14 01:42:22.199861 systemd[1]: Detected architecture x86-64.
Jan 14 01:42:22.199870 systemd[1]: Running in initrd.
Jan 14 01:42:22.199878 systemd[1]: No hostname configured, using default hostname.
Jan 14 01:42:22.199887 systemd[1]: Hostname set to .
Jan 14 01:42:22.199896 systemd[1]: Initializing machine ID from random generator.
Jan 14 01:42:22.199907 systemd[1]: Queued start job for default target initrd.target. Jan 14 01:42:22.199916 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:42:22.199925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:42:22.199933 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:42:22.199943 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 01:42:22.199952 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:42:22.199964 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 01:42:22.199973 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 01:42:22.199982 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:42:22.199991 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:42:22.200000 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:42:22.200009 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:42:22.200020 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:42:22.200028 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:42:22.200037 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:42:22.200046 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:42:22.200055 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:42:22.200064 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:42:22.200074 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 14 01:42:22.200085 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 01:42:22.200094 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:42:22.200103 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:42:22.200111 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:42:22.200120 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:42:22.200130 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 01:42:22.200141 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 01:42:22.200150 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:42:22.200159 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 01:42:22.200168 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 01:42:22.200177 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 01:42:22.200186 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:42:22.200195 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:42:22.200207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:42:22.200216 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 01:42:22.200275 systemd-journald[304]: Collecting audit messages is enabled. Jan 14 01:42:22.200301 kernel: audit: type=1130 audit(1768354942.173:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:42:22.200311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:42:22.200320 systemd-journald[304]: Journal started Jan 14 01:42:22.200341 systemd-journald[304]: Runtime Journal (/run/log/journal/8881fd50f14a46af93395fc72c5561a4) is 8M, max 78.1M, 70.1M free. Jan 14 01:42:22.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.212398 kernel: audit: type=1130 audit(1768354942.201:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.212468 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:42:22.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.214206 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 01:42:22.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.227870 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 01:42:22.231994 kernel: audit: type=1130 audit(1768354942.219:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:42:22.232013 kernel: Bridge firewalling registered Jan 14 01:42:22.229420 systemd-modules-load[306]: Inserted module 'br_netfilter' Jan 14 01:42:22.240489 kernel: audit: type=1130 audit(1768354942.232:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.232904 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:42:22.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.246380 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:42:22.250699 kernel: audit: type=1130 audit(1768354942.241:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.252153 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:42:22.262542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:42:22.360680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:42:22.377353 kernel: audit: type=1130 audit(1768354942.361:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:42:22.377380 kernel: audit: type=1130 audit(1768354942.370:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.369490 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:42:22.369705 systemd-tmpfiles[318]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 01:42:22.387817 kernel: audit: type=1130 audit(1768354942.379:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.378347 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:42:22.398384 kernel: audit: type=1130 audit(1768354942.388:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:42:22.379885 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:42:22.392391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 01:42:22.407000 audit: BPF prog-id=6 op=LOAD Jan 14 01:42:22.408902 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:42:22.412393 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:42:22.433979 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:42:22.437542 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 01:42:22.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.447948 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:42:22.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.469831 dracut-cmdline[341]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:42:22.487318 systemd-resolved[327]: Positive Trust Anchors: Jan 14 01:42:22.488506 systemd-resolved[327]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:42:22.488513 systemd-resolved[327]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:42:22.488543 systemd-resolved[327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:42:22.519631 systemd-resolved[327]: Defaulting to hostname 'linux'. Jan 14 01:42:22.522425 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:42:22.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.524230 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:42:22.598335 kernel: Loading iSCSI transport class v2.0-870. Jan 14 01:42:22.635297 kernel: iscsi: registered transport (tcp) Jan 14 01:42:22.662044 kernel: iscsi: registered transport (qla4xxx) Jan 14 01:42:22.662155 kernel: QLogic iSCSI HBA Driver Jan 14 01:42:22.694518 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:42:22.715649 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:42:22.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:42:22.718508 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:42:22.781593 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 01:42:22.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.784507 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 01:42:22.787419 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 01:42:22.834684 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:42:22.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.836000 audit: BPF prog-id=7 op=LOAD Jan 14 01:42:22.836000 audit: BPF prog-id=8 op=LOAD Jan 14 01:42:22.840405 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:42:22.872223 systemd-udevd[583]: Using default interface naming scheme 'v257'. Jan 14 01:42:22.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.886757 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:42:22.890960 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 01:42:22.913457 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 14 01:42:22.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.917000 audit: BPF prog-id=9 op=LOAD Jan 14 01:42:22.918441 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:42:22.921238 dracut-pre-trigger[655]: rd.md=0: removing MD RAID activation Jan 14 01:42:22.958518 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:42:22.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.962049 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:42:22.973670 systemd-networkd[689]: lo: Link UP Jan 14 01:42:22.973675 systemd-networkd[689]: lo: Gained carrier Jan 14 01:42:22.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:22.975364 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:42:22.977032 systemd[1]: Reached target network.target - Network. Jan 14 01:42:23.066034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:42:23.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:23.069606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 01:42:23.184160 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Jan 14 01:42:23.202270 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 14 01:42:23.206265 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 01:42:23.245016 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 14 01:42:23.248036 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 14 01:42:23.251305 kernel: AES CTR mode by8 optimization enabled Jan 14 01:42:23.253489 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 01:42:23.422363 disk-uuid[803]: Primary Header is updated. Jan 14 01:42:23.422363 disk-uuid[803]: Secondary Entries is updated. Jan 14 01:42:23.422363 disk-uuid[803]: Secondary Header is updated. Jan 14 01:42:23.437111 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 14 01:42:23.459065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:42:23.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:23.459207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:42:23.462357 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:42:23.467880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:42:23.501512 systemd-networkd[689]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:42:23.501522 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 01:42:23.502443 systemd-networkd[689]: eth0: Link UP Jan 14 01:42:23.502686 systemd-networkd[689]: eth0: Gained carrier Jan 14 01:42:23.502695 systemd-networkd[689]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:42:23.639415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:42:23.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:23.653878 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 01:42:23.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:23.655401 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:42:23.656543 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:42:23.658219 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:42:23.662214 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 01:42:23.689662 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:42:23.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:24.357368 systemd-networkd[689]: eth0: DHCPv4 address 172.239.193.229/24, gateway 172.239.193.1 acquired from 23.205.167.145 Jan 14 01:42:24.511020 disk-uuid[810]: Warning: The kernel is still using the old partition table. 
Jan 14 01:42:24.511020 disk-uuid[810]: The new table will be used at the next reboot or after you Jan 14 01:42:24.511020 disk-uuid[810]: run partprobe(8) or kpartx(8) Jan 14 01:42:24.511020 disk-uuid[810]: The operation has completed successfully. Jan 14 01:42:24.521803 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 01:42:24.521999 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 01:42:24.540732 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 14 01:42:24.540754 kernel: audit: type=1130 audit(1768354944.523:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:24.540775 kernel: audit: type=1131 audit(1768354944.523:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:24.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:24.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:24.526418 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 14 01:42:24.569907 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (848) Jan 14 01:42:24.569952 kernel: BTRFS info (device sda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:42:24.573361 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:42:24.582970 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 01:42:24.582998 kernel: BTRFS info (device sda6): turning on async discard Jan 14 01:42:24.583014 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 01:42:24.595271 kernel: BTRFS info (device sda6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:42:24.596343 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 01:42:24.605722 kernel: audit: type=1130 audit(1768354944.596:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:24.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:24.599588 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 01:42:24.731297 ignition[867]: Ignition 2.24.0 Jan 14 01:42:24.731315 ignition[867]: Stage: fetch-offline Jan 14 01:42:24.731363 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:42:24.731376 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 14 01:42:24.742809 kernel: audit: type=1130 audit(1768354944.734:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:42:24.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:24.733487 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:42:24.731465 ignition[867]: parsed url from cmdline: "" Jan 14 01:42:24.737418 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 01:42:24.731469 ignition[867]: no config URL provided Jan 14 01:42:24.731475 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 01:42:24.731487 ignition[867]: no config at "/usr/lib/ignition/user.ign" Jan 14 01:42:24.731492 ignition[867]: failed to fetch config: resource requires networking Jan 14 01:42:24.731793 ignition[867]: Ignition finished successfully Jan 14 01:42:24.762922 ignition[873]: Ignition 2.24.0 Jan 14 01:42:24.762929 ignition[873]: Stage: fetch Jan 14 01:42:24.763061 ignition[873]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:42:24.763071 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 14 01:42:24.763155 ignition[873]: parsed url from cmdline: "" Jan 14 01:42:24.763159 ignition[873]: no config URL provided Jan 14 01:42:24.763168 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 01:42:24.763176 ignition[873]: no config at "/usr/lib/ignition/user.ign" Jan 14 01:42:24.763210 ignition[873]: PUT http://169.254.169.254/v1/token: attempt #1 Jan 14 01:42:24.868145 ignition[873]: PUT result: OK Jan 14 01:42:24.868266 ignition[873]: GET http://169.254.169.254/v1/user-data: attempt #1 Jan 14 01:42:24.984965 ignition[873]: GET result: OK Jan 14 01:42:24.985085 ignition[873]: parsing config with SHA512: ab8810c513ea6de27c7c62777a6066de7573908cae6307b1c597f46ea46dd151980f32082c7f7d9b3791e7768efb1f51df7d6f27dbad41af8f76cf068d0e1eee Jan 14 01:42:24.991765 unknown[873]: fetched base 
config from "system" Jan 14 01:42:24.991775 unknown[873]: fetched base config from "system" Jan 14 01:42:24.992047 ignition[873]: fetch: fetch complete Jan 14 01:42:24.991781 unknown[873]: fetched user config from "akamai" Jan 14 01:42:24.992053 ignition[873]: fetch: fetch passed Jan 14 01:42:24.995315 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 01:42:25.005780 kernel: audit: type=1130 audit(1768354944.996:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:24.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:24.992106 ignition[873]: Ignition finished successfully Jan 14 01:42:24.998472 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 01:42:25.024887 ignition[879]: Ignition 2.24.0 Jan 14 01:42:25.024901 ignition[879]: Stage: kargs Jan 14 01:42:25.025038 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:42:25.025049 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 14 01:42:25.026026 ignition[879]: kargs: kargs passed Jan 14 01:42:25.026069 ignition[879]: Ignition finished successfully Jan 14 01:42:25.030835 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 01:42:25.040429 kernel: audit: type=1130 audit(1768354945.032:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:25.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:42:25.034399 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 01:42:25.068425 ignition[885]: Ignition 2.24.0 Jan 14 01:42:25.068437 ignition[885]: Stage: disks Jan 14 01:42:25.068568 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:42:25.068768 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 14 01:42:25.069322 ignition[885]: disks: disks passed Jan 14 01:42:25.072550 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 01:42:25.081549 kernel: audit: type=1130 audit(1768354945.073:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:25.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:25.069365 ignition[885]: Ignition finished successfully Jan 14 01:42:25.074306 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 01:42:25.082233 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 01:42:25.083706 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:42:25.085403 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:42:25.087025 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:42:25.089429 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 01:42:25.139984 systemd-fsck[893]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 01:42:25.143459 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 14 01:42:25.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:25.148351 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 01:42:25.155912 kernel: audit: type=1130 audit(1768354945.145:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:25.192420 systemd-networkd[689]: eth0: Gained IPv6LL
Jan 14 01:42:25.270263 kernel: EXT4-fs (sda9): mounted filesystem 9c98b0a3-27fc-41c4-a169-349b38bd9ceb r/w with ordered data mode. Quota mode: none.
Jan 14 01:42:25.271145 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 01:42:25.272773 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 01:42:25.275762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:42:25.279333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 01:42:25.281336 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 01:42:25.281378 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 01:42:25.281405 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:42:25.291050 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 01:42:25.294067 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 01:42:25.303589 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (901)
Jan 14 01:42:25.303627 kernel: BTRFS info (device sda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:42:25.309784 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:42:25.318803 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 14 01:42:25.318828 kernel: BTRFS info (device sda6): turning on async discard
Jan 14 01:42:25.318841 kernel: BTRFS info (device sda6): enabling free space tree
Jan 14 01:42:25.324109 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:42:25.480052 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 01:42:25.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:25.484338 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 01:42:25.490257 kernel: audit: type=1130 audit(1768354945.480:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:25.493300 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 01:42:25.506269 kernel: BTRFS info (device sda6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:42:25.533513 ignition[999]: INFO : Ignition 2.24.0
Jan 14 01:42:25.533933 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 01:42:25.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:25.544811 ignition[999]: INFO : Stage: mount
Jan 14 01:42:25.544811 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:42:25.544811 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 14 01:42:25.544811 ignition[999]: INFO : mount: mount passed
Jan 14 01:42:25.544811 ignition[999]: INFO : Ignition finished successfully
Jan 14 01:42:25.549735 kernel: audit: type=1130 audit(1768354945.536:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:25.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:25.545644 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 01:42:25.550346 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 01:42:25.554711 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 01:42:25.564942 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:42:25.590269 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1011)
Jan 14 01:42:25.594640 kernel: BTRFS info (device sda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:42:25.594666 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:42:25.603962 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 14 01:42:25.603988 kernel: BTRFS info (device sda6): turning on async discard
Jan 14 01:42:25.604002 kernel: BTRFS info (device sda6): enabling free space tree
Jan 14 01:42:25.608600 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:42:25.638215 ignition[1027]: INFO : Ignition 2.24.0
Jan 14 01:42:25.638215 ignition[1027]: INFO : Stage: files
Jan 14 01:42:25.639853 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:42:25.639853 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 14 01:42:25.639853 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 01:42:25.643364 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 01:42:25.643364 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 01:42:25.646888 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 01:42:25.670234 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 01:42:25.670234 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 01:42:25.670234 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 01:42:25.670234 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 14 01:42:25.650172 unknown[1027]: wrote ssh authorized keys file for user: core
Jan 14 01:42:25.855891 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 01:42:26.021751 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 01:42:26.023146 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 01:42:26.023146 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 01:42:26.023146 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 01:42:26.023146 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 01:42:26.023146 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 01:42:26.023146 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 01:42:26.023146 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 01:42:26.023146 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 01:42:26.032173 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 01:42:26.032173 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 01:42:26.032173 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:42:26.032173 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:42:26.032173 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:42:26.032173 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 14 01:42:26.533386 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 01:42:26.836721 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:42:26.836721 ignition[1027]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 01:42:26.836721 ignition[1027]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 01:42:26.841216 ignition[1027]: INFO : files: files passed
Jan 14 01:42:26.841216 ignition[1027]: INFO : Ignition finished successfully
Jan 14 01:42:26.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:26.841236 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 01:42:26.846396 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 01:42:26.851445 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 01:42:26.861346 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 01:42:26.866451 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 01:42:26.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:26.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:26.884134 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:42:26.884134 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:42:26.887686 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:42:26.890563 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 01:42:26.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:26.891639 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 01:42:26.894304 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 01:42:26.962417 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 01:42:26.962566 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 01:42:26.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:26.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:26.964845 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 01:42:26.965777 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 01:42:26.968575 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 01:42:26.969485 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 01:42:26.996830 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 01:42:26.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:26.999741 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 01:42:27.022175 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 01:42:27.023595 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 01:42:27.024484 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:42:27.026144 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 01:42:27.027725 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 01:42:27.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.027884 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 01:42:27.029584 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 01:42:27.030641 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 01:42:27.032104 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 01:42:27.033611 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:42:27.035023 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 01:42:27.036604 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 01:42:27.038174 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 01:42:27.039808 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 01:42:27.041594 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 01:42:27.043200 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 01:42:27.045051 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 01:42:27.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.046683 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 01:42:27.046785 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 01:42:27.048382 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 01:42:27.049745 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 01:42:27.051158 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 01:42:27.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.075142 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 01:42:27.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.076281 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 01:42:27.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.076389 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 01:42:27.078438 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 01:42:27.078594 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 01:42:27.079512 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 01:42:27.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.079611 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 01:42:27.082338 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 01:42:27.084837 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 01:42:27.086386 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 01:42:27.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.089580 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 01:42:27.091721 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 01:42:27.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.091882 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 01:42:27.094409 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 01:42:27.094742 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 01:42:27.097139 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 01:42:27.097303 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 01:42:27.114690 ignition[1083]: INFO : Ignition 2.24.0
Jan 14 01:42:27.114690 ignition[1083]: INFO : Stage: umount
Jan 14 01:42:27.114690 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:42:27.114690 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 14 01:42:27.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.113510 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 01:42:27.125105 ignition[1083]: INFO : umount: umount passed
Jan 14 01:42:27.125105 ignition[1083]: INFO : Ignition finished successfully
Jan 14 01:42:27.113817 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 01:42:27.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.119409 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 01:42:27.119529 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 01:42:27.124263 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 01:42:27.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.124360 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 01:42:27.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.127873 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 01:42:27.127927 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 01:42:27.130646 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 01:42:27.130703 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 01:42:27.131891 systemd[1]: Stopped target network.target - Network.
Jan 14 01:42:27.132552 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 01:42:27.132608 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 01:42:27.134355 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 01:42:27.135025 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 01:42:27.138439 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 01:42:27.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.139627 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 01:42:27.140379 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 01:42:27.144372 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 01:42:27.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.144422 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 01:42:27.145320 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 01:42:27.145366 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 01:42:27.146087 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 14 01:42:27.146123 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 01:42:27.149355 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 01:42:27.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.149414 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 01:42:27.150751 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 01:42:27.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.150802 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 01:42:27.153601 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 01:42:27.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.155181 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 01:42:27.158629 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 01:42:27.172000 audit: BPF prog-id=6 op=UNLOAD
Jan 14 01:42:27.161541 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 01:42:27.172000 audit: BPF prog-id=9 op=UNLOAD
Jan 14 01:42:27.161663 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 01:42:27.164147 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 01:42:27.164218 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 01:42:27.166047 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 01:42:27.166186 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 01:42:27.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.168964 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 01:42:27.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.169102 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 01:42:27.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.172702 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 14 01:42:27.174356 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 01:42:27.174402 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 01:42:27.176610 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 01:42:27.178055 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 01:42:27.178117 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 01:42:27.181522 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 01:42:27.181574 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 01:42:27.184314 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 01:42:27.184369 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 01:42:27.185230 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 01:42:27.207189 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 01:42:27.207744 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 01:42:27.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.209532 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 01:42:27.209582 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 01:42:27.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.212434 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 01:42:27.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.212473 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 01:42:27.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.213171 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 01:42:27.213229 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 01:42:27.214775 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 01:42:27.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.214829 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 01:42:27.216414 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 01:42:27.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.216467 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 01:42:27.218919 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 01:42:27.220722 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 14 01:42:27.220788 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 01:42:27.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.223812 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 01:42:27.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:27.223889 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 01:42:27.225474 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 01:42:27.225531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:42:27.227808 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 01:42:27.230344 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 01:42:27.253061 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 01:42:27.253185 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 01:42:27.255178 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 01:42:27.257218 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 01:42:27.270944 systemd[1]: Switching root.
Jan 14 01:42:27.308562 systemd-journald[304]: Journal stopped
Jan 14 01:42:28.584428 systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Jan 14 01:42:28.584461 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 01:42:28.584475 kernel: SELinux: policy capability open_perms=1
Jan 14 01:42:28.584485 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 01:42:28.584495 kernel: SELinux: policy capability always_check_network=0
Jan 14 01:42:28.584507 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 01:42:28.584517 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 01:42:28.584527 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 01:42:28.584539 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 01:42:28.584548 kernel: SELinux: policy capability userspace_initial_context=0
Jan 14 01:42:28.584558 systemd[1]: Successfully loaded SELinux policy in 75.829ms.
Jan 14 01:42:28.584572 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.729ms.
Jan 14 01:42:28.584584 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 01:42:28.584595 systemd[1]: Detected virtualization kvm.
Jan 14 01:42:28.584608 systemd[1]: Detected architecture x86-64.
Jan 14 01:42:28.584619 systemd[1]: Detected first boot.
Jan 14 01:42:28.584630 systemd[1]: Initializing machine ID from random generator.
Jan 14 01:42:28.584641 zram_generator::config[1130]: No configuration found.
Jan 14 01:42:28.584652 kernel: Guest personality initialized and is inactive
Jan 14 01:42:28.584662 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 14 01:42:28.584675 kernel: Initialized host personality
Jan 14 01:42:28.584685 kernel: NET: Registered PF_VSOCK protocol family
Jan 14 01:42:28.584695 systemd[1]: Populated /etc with preset unit settings.
Jan 14 01:42:28.584706 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 01:42:28.584717 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 01:42:28.584727 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 01:42:28.584742 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 01:42:28.584756 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 01:42:28.584766 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 01:42:28.584777 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 01:42:28.584788 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 01:42:28.584799 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 01:42:28.584812 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 01:42:28.584825 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 01:42:28.584836 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 01:42:28.584847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 01:42:28.584857 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 01:42:28.584868 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 01:42:28.584879 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 01:42:28.584892 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 01:42:28.584906 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 01:42:28.584917 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 01:42:28.584928 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 01:42:28.584939 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 01:42:28.584949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 01:42:28.584963 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 01:42:28.584974 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 01:42:28.584985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:42:28.584996 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 01:42:28.585006 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 14 01:42:28.585017 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 01:42:28.585028 systemd[1]: Reached target swap.target - Swaps.
Jan 14 01:42:28.585041 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 01:42:28.585053 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 01:42:28.585065 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 14 01:42:28.585076 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 01:42:28.585091 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 14 01:42:28.585102 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 01:42:28.585113 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 14 01:42:28.585124 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 14 01:42:28.585135 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 01:42:28.585146 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 01:42:28.585159 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 01:42:28.585170 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 01:42:28.585181 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 01:42:28.585192 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 01:42:28.585203 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:42:28.585214 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 01:42:28.585225 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 01:42:28.585238 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 01:42:28.585270 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 01:42:28.585282 systemd[1]: Reached target machines.target - Containers.
Jan 14 01:42:28.585293 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 01:42:28.585304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 01:42:28.585315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 01:42:28.585326 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 01:42:28.585340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 01:42:28.585351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 01:42:28.585362 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 01:42:28.585373 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 01:42:28.585384 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 01:42:28.585395 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 01:42:28.585408 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 01:42:28.585419 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 01:42:28.585430 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 01:42:28.585441 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 01:42:28.585454 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 01:42:28.585465 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 01:42:28.585476 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 01:42:28.585490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 01:42:28.585501 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 01:42:28.585512 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 14 01:42:28.585523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 01:42:28.585534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:42:28.585545 kernel: ACPI: bus type drm_connector registered
Jan 14 01:42:28.585558 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 01:42:28.585568 kernel: fuse: init (API version 7.41)
Jan 14 01:42:28.585766 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 01:42:28.585777 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 01:42:28.585788 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 01:42:28.585799 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 01:42:28.585810 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 01:42:28.585823 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 01:42:28.585855 systemd-journald[1207]: Collecting audit messages is enabled.
Jan 14 01:42:28.585880 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 01:42:28.585895 systemd-journald[1207]: Journal started
Jan 14 01:42:28.585915 systemd-journald[1207]: Runtime Journal (/run/log/journal/9f0f0f0ec92b404c9a0c1d8d2d248586) is 8M, max 78.1M, 70.1M free.
Jan 14 01:42:28.245000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 14 01:42:28.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.471000 audit: BPF prog-id=14 op=UNLOAD
Jan 14 01:42:28.471000 audit: BPF prog-id=13 op=UNLOAD
Jan 14 01:42:28.477000 audit: BPF prog-id=15 op=LOAD
Jan 14 01:42:28.477000 audit: BPF prog-id=16 op=LOAD
Jan 14 01:42:28.477000 audit: BPF prog-id=17 op=LOAD
Jan 14 01:42:28.578000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 01:42:28.578000 audit[1207]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe6c07a2b0 a2=4000 a3=0 items=0 ppid=1 pid=1207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:28.578000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 01:42:28.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.129372 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 01:42:28.139058 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 14 01:42:28.139636 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 01:42:28.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.594258 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 01:42:28.594004 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 01:42:28.594220 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 01:42:28.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.595422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 01:42:28.595702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 01:42:28.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.597123 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 01:42:28.597414 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 01:42:28.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.598686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 01:42:28.599141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 01:42:28.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.600556 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 01:42:28.600817 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 01:42:28.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.602151 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 01:42:28.602756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 01:42:28.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.604195 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 01:42:28.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.605955 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 01:42:28.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.608684 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 01:42:28.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.610176 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 14 01:42:28.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.627230 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 01:42:28.629949 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 14 01:42:28.634347 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 01:42:28.638353 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 01:42:28.639406 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 01:42:28.639438 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 01:42:28.641427 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 14 01:42:28.642439 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 01:42:28.643514 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 01:42:28.651916 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 01:42:28.654129 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 01:42:28.655329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 01:42:28.659399 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 01:42:28.662344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 01:42:28.666099 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 01:42:28.669382 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 01:42:28.671411 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 01:42:28.675114 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 01:42:28.677701 systemd-journald[1207]: Time spent on flushing to /var/log/journal/9f0f0f0ec92b404c9a0c1d8d2d248586 is 70.674ms for 1119 entries.
Jan 14 01:42:28.677701 systemd-journald[1207]: System Journal (/var/log/journal/9f0f0f0ec92b404c9a0c1d8d2d248586) is 8M, max 588.1M, 580.1M free.
Jan 14 01:42:28.767007 systemd-journald[1207]: Received client request to flush runtime journal.
Jan 14 01:42:28.767051 kernel: loop1: detected capacity change from 0 to 50784
Jan 14 01:42:28.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.676459 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 01:42:28.695362 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 01:42:28.696422 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 01:42:28.701392 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 14 01:42:28.734849 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 01:42:28.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.746490 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 01:42:28.769416 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 01:42:28.782298 kernel: loop2: detected capacity change from 0 to 8
Jan 14 01:42:28.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.781000 audit: BPF prog-id=18 op=LOAD
Jan 14 01:42:28.781000 audit: BPF prog-id=19 op=LOAD
Jan 14 01:42:28.781000 audit: BPF prog-id=20 op=LOAD
Jan 14 01:42:28.778381 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 01:42:28.783042 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 14 01:42:28.785000 audit: BPF prog-id=21 op=LOAD
Jan 14 01:42:28.787394 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 01:42:28.790401 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 01:42:28.791572 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 14 01:42:28.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.812925 kernel: loop3: detected capacity change from 0 to 111560
Jan 14 01:42:28.814000 audit: BPF prog-id=22 op=LOAD
Jan 14 01:42:28.814000 audit: BPF prog-id=23 op=LOAD
Jan 14 01:42:28.814000 audit: BPF prog-id=24 op=LOAD
Jan 14 01:42:28.816644 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 14 01:42:28.820000 audit: BPF prog-id=25 op=LOAD
Jan 14 01:42:28.821000 audit: BPF prog-id=26 op=LOAD
Jan 14 01:42:28.821000 audit: BPF prog-id=27 op=LOAD
Jan 14 01:42:28.823102 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 01:42:28.857268 kernel: loop4: detected capacity change from 0 to 229808
Jan 14 01:42:28.862616 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jan 14 01:42:28.862915 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jan 14 01:42:28.878195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 01:42:28.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.913278 kernel: loop5: detected capacity change from 0 to 50784
Jan 14 01:42:28.919391 systemd-nsresourced[1274]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 14 01:42:28.924944 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 01:42:28.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.933497 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 14 01:42:28.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:28.941270 kernel: loop6: detected capacity change from 0 to 8
Jan 14 01:42:28.950290 kernel: loop7: detected capacity change from 0 to 111560
Jan 14 01:42:28.979266 kernel: loop1: detected capacity change from 0 to 229808
Jan 14 01:42:29.000394 (sd-merge)[1281]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'.
Jan 14 01:42:29.019604 (sd-merge)[1281]: Merged extensions into '/usr'.
Jan 14 01:42:29.030367 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 01:42:29.030387 systemd[1]: Reloading...
Jan 14 01:42:29.032813 systemd-oomd[1268]: No swap; memory pressure usage will be degraded
Jan 14 01:42:29.095605 systemd-resolved[1271]: Positive Trust Anchors:
Jan 14 01:42:29.096007 systemd-resolved[1271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 01:42:29.096055 systemd-resolved[1271]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 01:42:29.096114 systemd-resolved[1271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 01:42:29.102594 systemd-resolved[1271]: Defaulting to hostname 'linux'.
Jan 14 01:42:29.137273 zram_generator::config[1325]: No configuration found.
Jan 14 01:42:29.323577 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 01:42:29.323775 systemd[1]: Reloading finished in 292 ms.
Jan 14 01:42:29.359825 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 14 01:42:29.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:29.360858 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 01:42:29.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:29.361919 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 01:42:29.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:29.363003 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 01:42:29.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:29.368081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 01:42:29.387744 systemd[1]: Starting ensure-sysext.service...
Jan 14 01:42:29.391000 audit: BPF prog-id=8 op=UNLOAD
Jan 14 01:42:29.391000 audit: BPF prog-id=7 op=UNLOAD
Jan 14 01:42:29.392000 audit: BPF prog-id=28 op=LOAD
Jan 14 01:42:29.392000 audit: BPF prog-id=29 op=LOAD
Jan 14 01:42:29.391388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 01:42:29.394427 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 01:42:29.397000 audit: BPF prog-id=30 op=LOAD
Jan 14 01:42:29.397000 audit: BPF prog-id=25 op=UNLOAD
Jan 14 01:42:29.397000 audit: BPF prog-id=31 op=LOAD
Jan 14 01:42:29.397000 audit: BPF prog-id=32 op=LOAD
Jan 14 01:42:29.397000 audit: BPF prog-id=26 op=UNLOAD
Jan 14 01:42:29.397000 audit: BPF prog-id=27 op=UNLOAD
Jan 14 01:42:29.398000 audit: BPF prog-id=33 op=LOAD
Jan 14 01:42:29.398000 audit: BPF prog-id=15 op=UNLOAD
Jan 14 01:42:29.398000 audit: BPF prog-id=34 op=LOAD
Jan 14 01:42:29.398000 audit: BPF prog-id=35 op=LOAD
Jan 14 01:42:29.398000 audit: BPF prog-id=16 op=UNLOAD
Jan 14 01:42:29.398000 audit: BPF prog-id=17 op=UNLOAD
Jan 14 01:42:29.404000 audit: BPF prog-id=36 op=LOAD
Jan 14 01:42:29.404000 audit: BPF prog-id=21 op=UNLOAD
Jan 14 01:42:29.405000 audit: BPF prog-id=37 op=LOAD
Jan 14 01:42:29.405000 audit: BPF prog-id=22 op=UNLOAD
Jan 14 01:42:29.405000 audit: BPF prog-id=38 op=LOAD
Jan 14 01:42:29.405000 audit: BPF prog-id=39 op=LOAD
Jan 14 01:42:29.405000 audit: BPF prog-id=23 op=UNLOAD
Jan 14 01:42:29.405000 audit: BPF prog-id=24 op=UNLOAD
Jan 14 01:42:29.406000 audit: BPF prog-id=40 op=LOAD
Jan 14 01:42:29.407000 audit: BPF prog-id=18 op=UNLOAD
Jan 14 01:42:29.407000 audit: BPF prog-id=41 op=LOAD
Jan 14 01:42:29.407000 audit: BPF prog-id=42 op=LOAD
Jan 14 01:42:29.407000 audit: BPF prog-id=19 op=UNLOAD
Jan 14 01:42:29.407000 audit: BPF prog-id=20 op=UNLOAD
Jan 14 01:42:29.414548 systemd[1]: Reload requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)...
Jan 14 01:42:29.414567 systemd[1]: Reloading...
Jan 14 01:42:29.432002 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 14 01:42:29.432044 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 14 01:42:29.432385 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 01:42:29.433654 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Jan 14 01:42:29.433733 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Jan 14 01:42:29.434344 systemd-udevd[1370]: Using default interface naming scheme 'v257'.
Jan 14 01:42:29.445406 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 01:42:29.445817 systemd-tmpfiles[1369]: Skipping /boot
Jan 14 01:42:29.471027 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 01:42:29.471115 systemd-tmpfiles[1369]: Skipping /boot
Jan 14 01:42:29.539280 zram_generator::config[1425]: No configuration found.
Jan 14 01:42:29.597273 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 01:42:29.634274 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 14 01:42:29.646262 kernel: ACPI: button: Power Button [PWRF]
Jan 14 01:42:29.707305 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 14 01:42:29.712286 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 14 01:42:29.829168 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 14 01:42:29.829337 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 14 01:42:29.831740 systemd[1]: Reloading finished in 416 ms.
Jan 14 01:42:29.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:29.842619 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 01:42:29.845238 kernel: kauditd_printk_skb: 144 callbacks suppressed
Jan 14 01:42:29.845302 kernel: audit: type=1130 audit(1768354949.843:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:29.855400 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 01:42:29.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:29.864276 kernel: audit: type=1130 audit(1768354949.856:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:29.878262 kernel: audit: type=1334 audit(1768354949.868:184): prog-id=43 op=LOAD
Jan 14 01:42:29.868000 audit: BPF prog-id=43 op=LOAD
Jan 14 01:42:29.884272 kernel: audit: type=1334 audit(1768354949.868:185): prog-id=40 op=UNLOAD
Jan 14 01:42:29.868000 audit: BPF prog-id=40 op=UNLOAD
Jan 14 01:42:29.869000 audit: BPF prog-id=44 op=LOAD
Jan 14 01:42:29.891265 kernel: audit: type=1334 audit(1768354949.869:186): prog-id=44 op=LOAD
Jan 14 01:42:29.916700 kernel: audit: type=1334 audit(1768354949.869:187): prog-id=45 op=LOAD
Jan 14 01:42:29.869000 audit: BPF prog-id=45 op=LOAD
Jan 14 01:42:29.869000 audit: BPF prog-id=41 op=UNLOAD
Jan 14 01:42:29.953268 kernel: audit: type=1334 audit(1768354949.869:188): prog-id=41 op=UNLOAD
Jan 14 01:42:29.869000 audit: BPF prog-id=42 op=UNLOAD
Jan 14 01:42:29.960272 kernel: audit: type=1334 audit(1768354949.869:189): prog-id=42 op=UNLOAD
Jan 14 01:42:29.869000 audit: BPF prog-id=46 op=LOAD
Jan 14 01:42:29.967605 kernel: audit: type=1334 audit(1768354949.869:190): prog-id=46 op=LOAD
Jan 14 01:42:29.967652 kernel: EDAC MC: Ver: 3.0.0
Jan 14 01:42:29.970718 kernel: audit: type=1334 audit(1768354949.869:191): prog-id=30 op=UNLOAD
Jan 14 01:42:29.869000 audit: BPF prog-id=30 op=UNLOAD
Jan 14 01:42:29.871000 audit: BPF prog-id=47 op=LOAD
Jan 14 01:42:29.871000 audit: BPF prog-id=48 op=LOAD
Jan 14 01:42:29.871000 audit: BPF prog-id=31 op=UNLOAD
Jan 14 01:42:29.871000 audit: BPF prog-id=32 op=UNLOAD
Jan 14 01:42:29.873000 audit: BPF prog-id=49 op=LOAD
Jan 14 01:42:29.873000 audit: BPF prog-id=36 op=UNLOAD
Jan 14 01:42:29.874000 audit: BPF prog-id=50 op=LOAD
Jan 14 01:42:29.874000 audit: BPF prog-id=51 op=LOAD
Jan 14 01:42:29.874000 audit: BPF prog-id=28 op=UNLOAD
Jan 14 01:42:29.876000 audit: BPF prog-id=29 op=UNLOAD
Jan 14 01:42:29.877000 audit: BPF prog-id=52 op=LOAD
Jan 14 01:42:29.877000 audit: BPF prog-id=33 op=UNLOAD
Jan 14 01:42:29.877000 audit: BPF prog-id=53 op=LOAD
Jan 14 01:42:29.877000 audit: BPF prog-id=54 op=LOAD
Jan 14 01:42:29.877000 audit: BPF prog-id=34 op=UNLOAD
Jan 14 01:42:29.877000 audit: BPF prog-id=35 op=UNLOAD
Jan 14 01:42:29.877000 audit: BPF prog-id=55 op=LOAD
Jan 14 01:42:29.880000 audit: BPF prog-id=37 op=UNLOAD
Jan 14 01:42:29.880000 audit: BPF prog-id=56 op=LOAD
Jan 14 01:42:29.880000 audit: BPF prog-id=57 op=LOAD
Jan 14 01:42:29.880000 audit: BPF prog-id=38 op=UNLOAD
Jan 14 01:42:29.880000 audit: BPF prog-id=39 op=UNLOAD
Jan 14 01:42:29.986500 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:42:29.990490 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 01:42:29.995533 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 01:42:29.996446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 01:42:30.002361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 01:42:30.008493 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 01:42:30.010993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 01:42:30.012464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 01:42:30.013432 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 01:42:30.015531 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 01:42:30.021538 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 01:42:30.022329 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 01:42:30.025359 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 01:42:30.028000 audit: BPF prog-id=58 op=LOAD
Jan 14 01:42:30.030508 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 01:42:30.038995 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 01:42:30.043906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:42:30.045749 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:42:30.049161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 01:42:30.050059 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 01:42:30.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.056160 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 01:42:30.056764 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 01:42:30.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.067751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:42:30.068362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 01:42:30.075511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 01:42:30.084300 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 01:42:30.090322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 01:42:30.091552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 01:42:30.091725 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 01:42:30.091813 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 01:42:30.092375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:42:30.102590 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 01:42:30.102869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 01:42:30.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.108000 audit[1506]: SYSTEM_BOOT pid=1506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.109491 systemd[1]: Finished ensure-sysext.service.
Jan 14 01:42:30.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.113213 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 01:42:30.125000 audit: BPF prog-id=59 op=LOAD
Jan 14 01:42:30.127510 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 14 01:42:30.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.141958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 01:42:30.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.156020 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 01:42:30.160783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 01:42:30.161072 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 01:42:30.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.163086 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 01:42:30.163723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 01:42:30.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.166872 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 01:42:30.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.168562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 01:42:30.169081 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 01:42:30.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:30.177177 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 01:42:30.177264 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 01:42:30.177297 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 01:42:30.195000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 14 01:42:30.195000 audit[1545]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0fa72b80 a2=420 a3=0 items=0 ppid=1491 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:30.195000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 01:42:30.197100 augenrules[1545]: No rules
Jan 14 01:42:30.197620 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 01:42:30.198492 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 01:42:30.257425 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 14 01:42:30.272913 systemd-networkd[1502]: lo: Link UP
Jan 14 01:42:30.272922 systemd-networkd[1502]: lo: Gained carrier
Jan 14 01:42:30.285933 systemd-networkd[1502]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 01:42:30.286192 systemd-networkd[1502]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 01:42:30.289305 systemd-networkd[1502]: eth0: Link UP
Jan 14 01:42:30.289934 systemd-networkd[1502]: eth0: Gained carrier
Jan 14 01:42:30.291522 systemd-networkd[1502]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 01:42:30.362108 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 01:42:30.364618 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:42:30.368365 systemd[1]: Reached target network.target - Network.
Jan 14 01:42:30.369159 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 01:42:30.372131 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 14 01:42:30.374086 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 01:42:30.405456 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 14 01:42:30.607471 ldconfig[1496]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 01:42:30.611309 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 01:42:30.613715 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 01:42:30.637278 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 01:42:30.638277 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 01:42:30.639102 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 01:42:30.639940 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 01:42:30.640845 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 14 01:42:30.641757 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 01:42:30.642608 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 01:42:30.643427 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 14 01:42:30.644302 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 14 01:42:30.645038 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 01:42:30.645811 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 01:42:30.645862 systemd[1]: Reached target paths.target - Path Units.
Jan 14 01:42:30.646545 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 01:42:30.648061 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 01:42:30.650409 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 01:42:30.653909 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 14 01:42:30.654851 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 14 01:42:30.655619 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 14 01:42:30.658285 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 01:42:30.659335 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 14 01:42:30.660687 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 01:42:30.662176 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 01:42:30.662890 systemd[1]: Reached target basic.target - Basic System.
Jan 14 01:42:30.663626 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 01:42:30.663659 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 01:42:30.664684 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 01:42:30.668383 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 14 01:42:30.674469 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 01:42:30.679107 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 01:42:30.682982 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 01:42:30.693040 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 01:42:30.693785 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 01:42:30.696468 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 14 01:42:30.700449 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 01:42:30.707562 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 14 01:42:30.710418 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 01:42:30.716228 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 01:42:30.722821 jq[1567]: false
Jan 14 01:42:30.725475 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 01:42:30.726213 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 01:42:30.726755 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 01:42:30.729884 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 01:42:30.742360 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 01:42:30.748351 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 01:42:30.750618 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 01:42:30.750909 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 01:42:30.752012 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache
Jan 14 01:42:30.752024 oslogin_cache_refresh[1569]: Refreshing passwd entry cache
Jan 14 01:42:30.768179 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 01:42:30.768445 extend-filesystems[1568]: Found /dev/sda6
Jan 14 01:42:30.768527 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 01:42:30.771945 oslogin_cache_refresh[1569]: Failure getting users, quitting
Jan 14 01:42:30.772628 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting
Jan 14 01:42:30.772628 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 14 01:42:30.772628 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache
Jan 14 01:42:30.771964 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 14 01:42:30.772006 oslogin_cache_refresh[1569]: Refreshing group entry cache
Jan 14 01:42:30.773274 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting
Jan 14 01:42:30.773274 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 14 01:42:30.772863 oslogin_cache_refresh[1569]: Failure getting groups, quitting
Jan 14 01:42:30.772873 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 14 01:42:30.777896 extend-filesystems[1568]: Found /dev/sda9
Jan 14 01:42:30.779811 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 14 01:42:30.781232 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 14 01:42:30.786231 extend-filesystems[1568]: Checking size of /dev/sda9
Jan 14 01:42:30.801305 update_engine[1578]: I20260114 01:42:30.800333 1578 main.cc:92] Flatcar Update Engine starting
Jan 14 01:42:30.805400 jq[1581]: true
Jan 14 01:42:30.818381 extend-filesystems[1568]: Resized partition /dev/sda9
Jan 14 01:42:30.825276 tar[1588]: linux-amd64/LICENSE
Jan 14 01:42:30.825276 tar[1588]: linux-amd64/helm
Jan 14 01:42:30.826464 extend-filesystems[1617]: resize2fs 1.47.3 (8-Jul-2025)
Jan 14 01:42:30.838288 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks
Jan 14 01:42:30.840459 dbus-daemon[1565]: [system] SELinux support is enabled
Jan 14 01:42:30.844146 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 01:42:30.847267 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 01:42:30.847301 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 01:42:30.848068 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 01:42:30.848089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 01:42:30.850171 jq[1611]: true
Jan 14 01:42:30.854347 coreos-metadata[1564]: Jan 14 01:42:30.854 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jan 14 01:42:30.872641 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 01:42:30.872955 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 01:42:30.883689 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 01:42:30.885776 update_engine[1578]: I20260114 01:42:30.883750 1578 update_check_scheduler.cc:74] Next update check in 4m30s
Jan 14 01:42:30.927747 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 01:42:30.974929 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 14 01:42:30.975000 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 01:42:30.975466 systemd-logind[1577]: New seat seat0.
Jan 14 01:42:30.977810 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 01:42:31.026669 bash[1638]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 01:42:31.030178 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 01:42:31.032075 systemd-networkd[1502]: eth0: DHCPv4 address 172.239.193.229/24, gateway 172.239.193.1 acquired from 23.205.167.145
Jan 14 01:42:31.032547 dbus-daemon[1565]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1502 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 14 01:42:31.042887 systemd-timesyncd[1529]: Network configuration changed, trying to establish connection.
Jan 14 01:42:31.043402 systemd[1]: Starting sshkeys.service...
Jan 14 01:42:31.049178 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 14 01:42:31.091841 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 14 01:42:31.095280 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 14 01:42:31.161258 kernel: EXT4-fs (sda9): resized filesystem to 19377147
Jan 14 01:42:31.171702 extend-filesystems[1617]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 14 01:42:31.171702 extend-filesystems[1617]: old_desc_blocks = 1, new_desc_blocks = 10
Jan 14 01:42:31.171702 extend-filesystems[1617]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long.
Jan 14 01:42:31.179638 extend-filesystems[1568]: Resized filesystem in /dev/sda9
Jan 14 01:42:31.173560 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 01:42:31.173879 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 01:42:31.198061 sshd_keygen[1609]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 01:42:31.222606 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 01:42:31.225978 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 01:42:31.256649 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 14 01:42:31.258751 coreos-metadata[1647]: Jan 14 01:42:31.258 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jan 14 01:42:31.264683 dbus-daemon[1565]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 14 01:42:31.278534 dbus-daemon[1565]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1645 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 14 01:42:31.292014 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 14 01:42:31.300712 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 01:42:31.301556 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 01:42:31.308533 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 01:42:31.340202 containerd[1600]: time="2026-01-14T01:42:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:42:31.346608 containerd[1600]: time="2026-01-14T01:42:31.344946016Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:42:31.353693 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 01:42:31.357475 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 01:42:31.361412 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 01:42:31.362445 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 01:42:31.388282 containerd[1600]: time="2026-01-14T01:42:31.386924675Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.52µs" Jan 14 01:42:31.388282 containerd[1600]: time="2026-01-14T01:42:31.386960255Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:42:31.388282 containerd[1600]: time="2026-01-14T01:42:31.387002825Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:42:31.388282 containerd[1600]: time="2026-01-14T01:42:31.387015095Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:42:31.388282 containerd[1600]: time="2026-01-14T01:42:31.387193645Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:42:31.388282 containerd[1600]: time="2026-01-14T01:42:31.387213945Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:42:31.389455 containerd[1600]: time="2026-01-14T01:42:31.388502914Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:42:31.390271 containerd[1600]: time="2026-01-14T01:42:31.389518354Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:42:31.390271 containerd[1600]: time="2026-01-14T01:42:31.389748014Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:42:31.390271 containerd[1600]: time="2026-01-14T01:42:31.389761544Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:42:31.390271 containerd[1600]: time="2026-01-14T01:42:31.389773584Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:42:31.390271 containerd[1600]: time="2026-01-14T01:42:31.389781654Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:42:31.390271 containerd[1600]: time="2026-01-14T01:42:31.389976274Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:42:31.390271 containerd[1600]: time="2026-01-14T01:42:31.389988784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:42:31.390271 containerd[1600]: time="2026-01-14T01:42:31.390077734Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 01:42:31.392341 containerd[1600]: time="2026-01-14T01:42:31.391926523Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:42:31.392341 containerd[1600]: time="2026-01-14T01:42:31.391983423Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:42:31.392341 containerd[1600]: time="2026-01-14T01:42:31.391994173Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:42:31.392341 containerd[1600]: time="2026-01-14T01:42:31.392016193Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:42:31.392341 containerd[1600]: time="2026-01-14T01:42:31.392301352Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:42:31.392534 containerd[1600]: time="2026-01-14T01:42:31.392506442Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:42:31.393281 coreos-metadata[1647]: Jan 14 01:42:31.393 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 14 01:42:31.396363 containerd[1600]: time="2026-01-14T01:42:31.396342660Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:42:31.396454 containerd[1600]: time="2026-01-14T01:42:31.396439180Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:42:31.397113 containerd[1600]: time="2026-01-14T01:42:31.397093980Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs 
type=io.containerd.differ.v1 Jan 14 01:42:31.397174 containerd[1600]: time="2026-01-14T01:42:31.397161810Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 01:42:31.397222 containerd[1600]: time="2026-01-14T01:42:31.397211070Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:42:31.397302 containerd[1600]: time="2026-01-14T01:42:31.397286800Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 01:42:31.397349 containerd[1600]: time="2026-01-14T01:42:31.397338300Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:42:31.397390 containerd[1600]: time="2026-01-14T01:42:31.397380210Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:42:31.397432 containerd[1600]: time="2026-01-14T01:42:31.397421820Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 01:42:31.397473 containerd[1600]: time="2026-01-14T01:42:31.397463140Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:42:31.397519 containerd[1600]: time="2026-01-14T01:42:31.397507550Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:42:31.397561 containerd[1600]: time="2026-01-14T01:42:31.397550560Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:42:31.397630 containerd[1600]: time="2026-01-14T01:42:31.397615780Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:42:31.397757 containerd[1600]: time="2026-01-14T01:42:31.397742120Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Jan 14 01:42:31.397899 containerd[1600]: time="2026-01-14T01:42:31.397883250Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 01:42:31.397956 containerd[1600]: time="2026-01-14T01:42:31.397943830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:42:31.398010 containerd[1600]: time="2026-01-14T01:42:31.397998710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 01:42:31.398289 containerd[1600]: time="2026-01-14T01:42:31.398273059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:42:31.398342 containerd[1600]: time="2026-01-14T01:42:31.398330799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:42:31.398395 containerd[1600]: time="2026-01-14T01:42:31.398384259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:42:31.398440 containerd[1600]: time="2026-01-14T01:42:31.398429289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:42:31.398482 containerd[1600]: time="2026-01-14T01:42:31.398472129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:42:31.398539 containerd[1600]: time="2026-01-14T01:42:31.398527299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:42:31.398582 containerd[1600]: time="2026-01-14T01:42:31.398572139Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:42:31.398622 containerd[1600]: time="2026-01-14T01:42:31.398612349Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:42:31.398674 containerd[1600]: 
time="2026-01-14T01:42:31.398663769Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:42:31.398745 containerd[1600]: time="2026-01-14T01:42:31.398732699Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 01:42:31.399258 containerd[1600]: time="2026-01-14T01:42:31.398996779Z" level=info msg="Start snapshots syncer" Jan 14 01:42:31.399258 containerd[1600]: time="2026-01-14T01:42:31.399195429Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:42:31.399992 containerd[1600]: time="2026-01-14T01:42:31.399955509Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImag
eDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 01:42:31.400156 containerd[1600]: time="2026-01-14T01:42:31.400139809Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:42:31.400895 containerd[1600]: time="2026-01-14T01:42:31.400236778Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:42:31.400895 containerd[1600]: time="2026-01-14T01:42:31.400767098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:42:31.400895 containerd[1600]: time="2026-01-14T01:42:31.400787328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:42:31.400895 containerd[1600]: time="2026-01-14T01:42:31.400798128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:42:31.400895 containerd[1600]: time="2026-01-14T01:42:31.400807048Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:42:31.400895 containerd[1600]: time="2026-01-14T01:42:31.400817598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:42:31.400895 containerd[1600]: time="2026-01-14T01:42:31.400827798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:42:31.400895 containerd[1600]: 
time="2026-01-14T01:42:31.400838778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:42:31.400895 containerd[1600]: time="2026-01-14T01:42:31.400847858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 01:42:31.400895 containerd[1600]: time="2026-01-14T01:42:31.400866038Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 01:42:31.400986 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401692668Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401715328Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401723828Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401804168Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401812418Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401823598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401833338Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 
01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401843888Z" level=info msg="runtime interface created" Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401848958Z" level=info msg="created NRI interface" Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401856758Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401869388Z" level=info msg="Connect containerd service" Jan 14 01:42:31.401996 containerd[1600]: time="2026-01-14T01:42:31.401887218Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:42:31.404168 containerd[1600]: time="2026-01-14T01:42:31.403583237Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:42:31.464599 polkitd[1663]: Started polkitd version 126 Jan 14 01:42:31.472426 polkitd[1663]: Loading rules from directory /etc/polkit-1/rules.d Jan 14 01:42:31.472969 polkitd[1663]: Loading rules from directory /run/polkit-1/rules.d Jan 14 01:42:31.473060 polkitd[1663]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 01:42:31.474102 polkitd[1663]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 14 01:42:31.474133 polkitd[1663]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 01:42:31.474168 polkitd[1663]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 14 01:42:31.476132 polkitd[1663]: Finished loading, compiling and executing 2 rules Jan 14 01:42:31.476879 systemd[1]: Started polkit.service - Authorization Manager. 
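The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a node where no CNI configuration has been installed yet. A minimal bridge-type network config of the kind the CRI plugin would pick up is sketched below; the network name, bridge name, and subnet are placeholder assumptions, not values from this host:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

Dropping a file like this into /etc/cni/net.d (e.g. as 10-example.conf) clears the warning on the next config sync, provided the matching `bridge` and `host-local` plugin binaries exist under the configured binDir (/opt/cni/bin per the cri plugin config logged above).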
Jan 14 01:42:31.479649 dbus-daemon[1565]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 14 01:42:31.480537 polkitd[1663]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 14 01:42:31.499704 systemd-hostnamed[1645]: Hostname set to <172-239-193-229> (transient) Jan 14 01:42:31.499836 systemd-resolved[1271]: System hostname changed to '172-239-193-229'. Jan 14 01:42:31.517722 containerd[1600]: time="2026-01-14T01:42:31.517687690Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 01:42:31.517785 containerd[1600]: time="2026-01-14T01:42:31.517759190Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 01:42:31.517785 containerd[1600]: time="2026-01-14T01:42:31.517779930Z" level=info msg="Start subscribing containerd event" Jan 14 01:42:31.517822 containerd[1600]: time="2026-01-14T01:42:31.517800530Z" level=info msg="Start recovering state" Jan 14 01:42:31.517896 containerd[1600]: time="2026-01-14T01:42:31.517878980Z" level=info msg="Start event monitor" Jan 14 01:42:31.517916 containerd[1600]: time="2026-01-14T01:42:31.517896060Z" level=info msg="Start cni network conf syncer for default" Jan 14 01:42:31.517916 containerd[1600]: time="2026-01-14T01:42:31.517903480Z" level=info msg="Start streaming server" Jan 14 01:42:31.517916 containerd[1600]: time="2026-01-14T01:42:31.517910460Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 01:42:31.517976 containerd[1600]: time="2026-01-14T01:42:31.517916960Z" level=info msg="runtime interface starting up..." Jan 14 01:42:31.517976 containerd[1600]: time="2026-01-14T01:42:31.517923460Z" level=info msg="starting plugins..." Jan 14 01:42:31.517976 containerd[1600]: time="2026-01-14T01:42:31.517936820Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 01:42:31.518193 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 14 01:42:31.520933 containerd[1600]: time="2026-01-14T01:42:31.520911028Z" level=info msg="containerd successfully booted in 0.181379s" Jan 14 01:42:31.527935 coreos-metadata[1647]: Jan 14 01:42:31.527 INFO Fetch successful Jan 14 01:42:31.550498 update-ssh-keys[1698]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:42:31.551781 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 01:42:31.556644 systemd[1]: Finished sshkeys.service. Jan 14 01:42:31.560146 tar[1588]: linux-amd64/README.md Jan 14 01:42:31.575638 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 01:42:31.847482 systemd-networkd[1502]: eth0: Gained IPv6LL Jan 14 01:42:31.848198 systemd-timesyncd[1529]: Network configuration changed, trying to establish connection. Jan 14 01:42:31.851041 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:42:31.852329 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:42:31.856402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:42:31.860507 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 01:42:31.864682 coreos-metadata[1564]: Jan 14 01:42:31.864 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 14 01:42:31.893458 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 01:42:31.952629 coreos-metadata[1564]: Jan 14 01:42:31.952 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 14 01:42:32.134184 coreos-metadata[1564]: Jan 14 01:42:32.133 INFO Fetch successful Jan 14 01:42:32.134184 coreos-metadata[1564]: Jan 14 01:42:32.133 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 14 01:42:32.392298 coreos-metadata[1564]: Jan 14 01:42:32.391 INFO Fetch successful Jan 14 01:42:32.506266 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
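The systemd-networkd-wait-online / network-online.target sequence above is the standard gate for units that need a configured network before starting (as kubelet.service and nvidia.service do here). A hedged sketch of a unit ordered behind that target follows; the unit name and ExecStart are placeholders for illustration:

```ini
# /etc/systemd/system/example-after-network.service (hypothetical unit)
[Unit]
Description=Example service that waits for the network to be configured
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/true

[Install]
WantedBy=multi-user.target
```

Note that `Wants=` pulls the target in while `After=` enforces ordering; declaring only `After=` would not cause systemd-networkd-wait-online.service to run at all.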
Jan 14 01:42:32.507533 systemd-timesyncd[1529]: Network configuration changed, trying to establish connection. Jan 14 01:42:32.509089 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 01:42:32.747512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:42:32.749532 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 01:42:32.752362 systemd[1]: Startup finished in 2.946s (kernel) + 5.704s (initrd) + 5.386s (userspace) = 14.038s. Jan 14 01:42:32.756902 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:42:33.327479 kubelet[1743]: E0114 01:42:33.327411 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:42:33.331878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:42:33.332088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:42:33.332871 systemd[1]: kubelet.service: Consumed 889ms CPU time, 266.9M memory peak. Jan 14 01:42:33.895948 systemd-timesyncd[1529]: Network configuration changed, trying to establish connection. Jan 14 01:42:35.029336 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 01:42:35.030539 systemd[1]: Started sshd@0-172.239.193.229:22-20.161.92.111:37336.service - OpenSSH per-connection server daemon (20.161.92.111:37336). 
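The kubelet failure logged above is the classic pre-join state: /var/lib/kubelet/config.yaml does not exist until `kubeadm init` or `kubeadm join` writes it, so the unit exits with status 1 and will keep restarting until the node is joined. For orientation, a minimal KubeletConfiguration of the shape kubeadm generates is sketched below; the field values are illustrative assumptions, and on a kubeadm-managed node this file should be generated, not written by hand:

```yaml
# /var/lib/kubelet/config.yaml (normally produced by kubeadm, shown as a sketch)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver matches the SystemdCgroup=true runc option
# visible in the containerd cri config logged earlier in this boot.
cgroupDriver: systemd
```

The KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS warning from the unit is benign: those environment variables are referenced by the drop-in but left unset until kubeadm populates them.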
Jan 14 01:42:35.196008 sshd[1755]: Accepted publickey for core from 20.161.92.111 port 37336 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:42:35.197739 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:42:35.206118 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 01:42:35.207556 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 01:42:35.213867 systemd-logind[1577]: New session 1 of user core. Jan 14 01:42:35.227469 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 01:42:35.230715 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 01:42:35.245341 (systemd)[1761]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:42:35.248613 systemd-logind[1577]: New session 2 of user core. Jan 14 01:42:35.367794 systemd[1761]: Queued start job for default target default.target. Jan 14 01:42:35.374542 systemd[1761]: Created slice app.slice - User Application Slice. Jan 14 01:42:35.374575 systemd[1761]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 01:42:35.374589 systemd[1761]: Reached target paths.target - Paths. Jan 14 01:42:35.374640 systemd[1761]: Reached target timers.target - Timers. Jan 14 01:42:35.376627 systemd[1761]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 01:42:35.379381 systemd[1761]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 01:42:35.391087 systemd[1761]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 01:42:35.392733 systemd[1761]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 01:42:35.392906 systemd[1761]: Reached target sockets.target - Sockets. Jan 14 01:42:35.393057 systemd[1761]: Reached target basic.target - Basic System. 
Jan 14 01:42:35.393299 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 01:42:35.394342 systemd[1761]: Reached target default.target - Main User Target. Jan 14 01:42:35.394388 systemd[1761]: Startup finished in 140ms. Jan 14 01:42:35.397413 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 01:42:35.485032 systemd[1]: Started sshd@1-172.239.193.229:22-20.161.92.111:37340.service - OpenSSH per-connection server daemon (20.161.92.111:37340). Jan 14 01:42:35.646239 sshd[1775]: Accepted publickey for core from 20.161.92.111 port 37340 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:42:35.648372 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:42:35.654531 systemd-logind[1577]: New session 3 of user core. Jan 14 01:42:35.659414 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 01:42:35.720132 sshd[1779]: Connection closed by 20.161.92.111 port 37340 Jan 14 01:42:35.722313 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jan 14 01:42:35.726021 systemd[1]: sshd@1-172.239.193.229:22-20.161.92.111:37340.service: Deactivated successfully. Jan 14 01:42:35.728583 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 01:42:35.731079 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Jan 14 01:42:35.732314 systemd-logind[1577]: Removed session 3. Jan 14 01:42:35.748741 systemd[1]: Started sshd@2-172.239.193.229:22-20.161.92.111:37354.service - OpenSSH per-connection server daemon (20.161.92.111:37354). Jan 14 01:42:35.902422 sshd[1785]: Accepted publickey for core from 20.161.92.111 port 37354 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:42:35.903974 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:42:35.910326 systemd-logind[1577]: New session 4 of user core. 
Jan 14 01:42:35.915393 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 01:42:35.966528 sshd[1789]: Connection closed by 20.161.92.111 port 37354 Jan 14 01:42:35.968410 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jan 14 01:42:35.972607 systemd[1]: sshd@2-172.239.193.229:22-20.161.92.111:37354.service: Deactivated successfully. Jan 14 01:42:35.974863 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 01:42:35.977459 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Jan 14 01:42:35.978749 systemd-logind[1577]: Removed session 4. Jan 14 01:42:36.003767 systemd[1]: Started sshd@3-172.239.193.229:22-20.161.92.111:37356.service - OpenSSH per-connection server daemon (20.161.92.111:37356). Jan 14 01:42:36.154657 sshd[1795]: Accepted publickey for core from 20.161.92.111 port 37356 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:42:36.156814 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:42:36.162769 systemd-logind[1577]: New session 5 of user core. Jan 14 01:42:36.169399 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 01:42:36.224984 sshd[1799]: Connection closed by 20.161.92.111 port 37356 Jan 14 01:42:36.226415 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jan 14 01:42:36.231540 systemd[1]: sshd@3-172.239.193.229:22-20.161.92.111:37356.service: Deactivated successfully. Jan 14 01:42:36.233558 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 01:42:36.234939 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Jan 14 01:42:36.235990 systemd-logind[1577]: Removed session 5. Jan 14 01:42:36.257672 systemd[1]: Started sshd@4-172.239.193.229:22-20.161.92.111:37372.service - OpenSSH per-connection server daemon (20.161.92.111:37372). 
Jan 14 01:42:36.418344 sshd[1805]: Accepted publickey for core from 20.161.92.111 port 37372 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:42:36.420094 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:42:36.426101 systemd-logind[1577]: New session 6 of user core. Jan 14 01:42:36.431413 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 01:42:36.478218 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 01:42:36.478641 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:42:36.491731 sudo[1810]: pam_unix(sudo:session): session closed for user root Jan 14 01:42:36.512822 sshd[1809]: Connection closed by 20.161.92.111 port 37372 Jan 14 01:42:36.513485 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Jan 14 01:42:36.519082 systemd[1]: sshd@4-172.239.193.229:22-20.161.92.111:37372.service: Deactivated successfully. Jan 14 01:42:36.521403 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 01:42:36.522185 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Jan 14 01:42:36.523992 systemd-logind[1577]: Removed session 6. Jan 14 01:42:36.545810 systemd[1]: Started sshd@5-172.239.193.229:22-20.161.92.111:37384.service - OpenSSH per-connection server daemon (20.161.92.111:37384). Jan 14 01:42:36.714358 sshd[1817]: Accepted publickey for core from 20.161.92.111 port 37384 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:42:36.715886 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:42:36.721199 systemd-logind[1577]: New session 7 of user core. Jan 14 01:42:36.728399 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 14 01:42:36.770673 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 01:42:36.771126 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:42:36.773624 sudo[1823]: pam_unix(sudo:session): session closed for user root Jan 14 01:42:36.781236 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 01:42:36.781828 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:42:36.790998 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:42:36.833000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:42:36.834039 augenrules[1847]: No rules Jan 14 01:42:36.834466 kernel: kauditd_printk_skb: 45 callbacks suppressed Jan 14 01:42:36.834495 kernel: audit: type=1305 audit(1768354956.833:235): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:42:36.838859 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:42:36.839287 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 14 01:42:36.833000 audit[1847]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5ac00270 a2=420 a3=0 items=0 ppid=1828 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:36.843212 kernel: audit: type=1300 audit(1768354956.833:235): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5ac00270 a2=420 a3=0 items=0 ppid=1828 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:36.841792 sudo[1822]: pam_unix(sudo:session): session closed for user root
Jan 14 01:42:36.833000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 01:42:36.847874 kernel: audit: type=1327 audit(1768354956.833:235): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 01:42:36.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.851701 kernel: audit: type=1130 audit(1768354956.839:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.857121 kernel: audit: type=1131 audit(1768354956.839:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.841000 audit[1822]: USER_END pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.864576 kernel: audit: type=1106 audit(1768354956.841:238): pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.867263 sshd[1821]: Connection closed by 20.161.92.111 port 37384
Jan 14 01:42:36.867606 sshd-session[1817]: pam_unix(sshd:session): session closed for user core
Jan 14 01:42:36.841000 audit[1822]: CRED_DISP pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.870623 kernel: audit: type=1104 audit(1768354956.841:239): pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.869000 audit[1817]: USER_END pid=1817 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:42:36.872979 systemd[1]: sshd@5-172.239.193.229:22-20.161.92.111:37384.service: Deactivated successfully.
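The `proctitle=` values in the audit records above are the process's argv buffer, hex-encoded by the kernel because it contains NUL separators between arguments. A minimal decoding sketch (the helper name `decode_proctitle` is my own; the sample string is the auditctl PROCTITLE record from this log):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded bytes, NUL-separated argv."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("utf-8", "replace")

# PROCTITLE record emitted when audit-rules.service reloaded the rules:
print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))
# → /sbin/auditctl -R /etc/audit/audit.rules
```

This matches the surrounding context: the session removed the rule files and restarted audit-rules, after which auditctl reported "No rules".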
Jan 14 01:42:36.874283 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit.
Jan 14 01:42:36.875938 systemd[1]: session-7.scope: Deactivated successfully.
Jan 14 01:42:36.876651 kernel: audit: type=1106 audit(1768354956.869:240): pid=1817 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:42:36.879170 systemd-logind[1577]: Removed session 7.
Jan 14 01:42:36.869000 audit[1817]: CRED_DISP pid=1817 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:42:36.884539 kernel: audit: type=1104 audit(1768354956.869:241): pid=1817 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:42:36.895887 kernel: audit: type=1131 audit(1768354956.870:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.239.193.229:22-20.161.92.111:37384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.239.193.229:22-20.161.92.111:37384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.239.193.229:22-20.161.92.111:37388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:36.902804 systemd[1]: Started sshd@6-172.239.193.229:22-20.161.92.111:37388.service - OpenSSH per-connection server daemon (20.161.92.111:37388).
Jan 14 01:42:37.048000 audit[1858]: USER_ACCT pid=1858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:42:37.048755 sshd[1858]: Accepted publickey for core from 20.161.92.111 port 37388 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U
Jan 14 01:42:37.049000 audit[1858]: CRED_ACQ pid=1858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:42:37.050000 audit[1858]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda0dfcba0 a2=3 a3=0 items=0 ppid=1 pid=1858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.050000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:42:37.050891 sshd-session[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:42:37.056318 systemd-logind[1577]: New session 8 of user core.
Jan 14 01:42:37.062376 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 14 01:42:37.064000 audit[1858]: USER_START pid=1858 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:42:37.066000 audit[1862]: CRED_ACQ pid=1862 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:42:37.097000 audit[1863]: USER_ACCT pid=1863 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:37.098292 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 01:42:37.098000 audit[1863]: CRED_REFR pid=1863 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:37.098811 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:42:37.098000 audit[1863]: USER_START pid=1863 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:42:37.455147 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 14 01:42:37.473595 (dockerd)[1881]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 14 01:42:37.735393 dockerd[1881]: time="2026-01-14T01:42:37.734906360Z" level=info msg="Starting up"
Jan 14 01:42:37.736476 dockerd[1881]: time="2026-01-14T01:42:37.736444500Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 14 01:42:37.747200 dockerd[1881]: time="2026-01-14T01:42:37.747100264Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 14 01:42:37.763994 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1331793749-merged.mount: Deactivated successfully.
Jan 14 01:42:37.794606 dockerd[1881]: time="2026-01-14T01:42:37.794554471Z" level=info msg="Loading containers: start."
Jan 14 01:42:37.804283 kernel: Initializing XFRM netlink socket
Jan 14 01:42:37.861000 audit[1931]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.861000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd15825380 a2=0 a3=0 items=0 ppid=1881 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.861000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 14 01:42:37.863000 audit[1933]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.863000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc3e38b2c0 a2=0 a3=0 items=0 ppid=1881 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.863000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 14 01:42:37.866000 audit[1935]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.866000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9710aa20 a2=0 a3=0 items=0 ppid=1881 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.866000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 14 01:42:37.869000 audit[1937]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.869000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9db5d800 a2=0 a3=0 items=0 ppid=1881 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 14 01:42:37.872000 audit[1939]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.872000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd95db5320 a2=0 a3=0 items=0 ppid=1881 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Jan 14 01:42:37.874000 audit[1941]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.874000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd8c25d100 a2=0 a3=0 items=0 ppid=1881 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 14 01:42:37.876000 audit[1943]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.876000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd76131210 a2=0 a3=0 items=0 ppid=1881 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.876000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 14 01:42:37.879000 audit[1945]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.879000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff5c7309e0 a2=0 a3=0 items=0 ppid=1881 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 14 01:42:37.905000 audit[1948]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.905000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff0917a900 a2=0 a3=0 items=0 ppid=1881 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.905000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Jan 14 01:42:37.907000 audit[1950]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.907000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdde6fc2e0 a2=0 a3=0 items=0 ppid=1881 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.907000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Jan 14 01:42:37.913000 audit[1952]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.913000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffff8e64d20 a2=0 a3=0 items=0 ppid=1881 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Jan 14 01:42:37.915000 audit[1954]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.915000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc74728180 a2=0 a3=0 items=0 ppid=1881 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.915000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 14 01:42:37.918000 audit[1956]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.918000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe9a63b370 a2=0 a3=0 items=0 ppid=1881 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Jan 14 01:42:37.961000 audit[1986]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.961000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffed16583d0 a2=0 a3=0 items=0 ppid=1881 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.961000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 14 01:42:37.963000 audit[1988]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.963000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc23bc5500 a2=0 a3=0 items=0 ppid=1881 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 14 01:42:37.965000 audit[1990]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.965000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe816e2300 a2=0 a3=0 items=0 ppid=1881 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 14 01:42:37.967000 audit[1992]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.967000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc00b9a40 a2=0 a3=0 items=0 ppid=1881 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.967000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 14 01:42:37.969000 audit[1994]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.969000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd8055ce50 a2=0 a3=0 items=0 ppid=1881 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Jan 14 01:42:37.972000 audit[1996]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.972000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdd151f800 a2=0 a3=0 items=0 ppid=1881 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.972000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 14 01:42:37.974000 audit[1998]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.974000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcb7f1f810 a2=0 a3=0 items=0 ppid=1881 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.974000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 14 01:42:37.976000 audit[2000]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.976000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdd344b110 a2=0 a3=0 items=0 ppid=1881 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 14 01:42:37.979000 audit[2002]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.979000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff9f72d000 a2=0 a3=0 items=0 ppid=1881 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.979000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238
Jan 14 01:42:37.981000 audit[2004]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.981000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffec0817240 a2=0 a3=0 items=0 ppid=1881 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.981000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Jan 14 01:42:37.983000 audit[2006]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.983000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff80843d20 a2=0 a3=0 items=0 ppid=1881 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Jan 14 01:42:37.985000 audit[2008]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.985000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe20e968f0 a2=0 a3=0 items=0 ppid=1881 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.985000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 14 01:42:37.989000 audit[2010]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:37.989000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcfff1f180 a2=0 a3=0 items=0 ppid=1881 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.989000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Jan 14 01:42:37.995000 audit[2015]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.995000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe0e9362e0 a2=0 a3=0 items=0 ppid=1881 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Jan 14 01:42:37.997000 audit[2017]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:37.997000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffff968f520 a2=0 a3=0 items=0 ppid=1881 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:37.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Jan 14 01:42:38.000000 audit[2019]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:42:38.000000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff885e0cf0 a2=0 a3=0 items=0 ppid=1881 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:38.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jan 14 01:42:38.002000 audit[2021]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:38.002000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffaf94d2f0 a2=0 a3=0 items=0 ppid=1881 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:38.002000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Jan 14 01:42:38.004000 audit[2023]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:38.004000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdb0a22ed0 a2=0 a3=0 items=0 ppid=1881 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:38.004000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Jan 14 01:42:38.007000 audit[2025]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:42:38.007000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc2f4ee270 a2=0 a3=0 items=0 ppid=1881 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:42:38.007000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jan 14 01:42:38.016983 systemd-timesyncd[1529]: Network configuration changed, trying to establish connection.
Jan 14 01:42:38.027051 systemd-timesyncd[1529]: Network configuration changed, trying to establish connection.
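Decoding the PROCTITLE payloads in the NETFILTER_CFG records recovers the exact iptables/ip6tables commands dockerd ran to set up its chains. A small sketch (the function name `audit_proctitle_to_cmd` is my own; both sample strings are taken verbatim from the records in this log):

```python
def audit_proctitle_to_cmd(hex_str: str) -> str:
    """Render an audit PROCTITLE hex payload as the command line it encodes."""
    # argv elements are NUL-separated in the raw proctitle buffer
    return " ".join(bytes.fromhex(hex_str).decode().split("\x00"))

# First NETFILTER_CFG record of the run (pid 1931):
print(audit_proctitle_to_cmd(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
))
# → /usr/bin/iptables --wait -t nat -N DOCKER

# The ip6tables DOCKER-USER record (pid 2023):
print(audit_proctitle_to_cmd(
    "2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E"
))
# → /usr/bin/ip6tables --wait -A DOCKER-USER -j RETURN
```

Read this way, the record sequence is the standard Docker chain setup (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) applied first for IPv4, then for IPv6.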
Jan 14 01:42:38.029000 audit[2031]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:38.029000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffea2b03a50 a2=0 a3=0 items=0 ppid=1881 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:38.029000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:42:38.031000 audit[2033]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:38.031000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd4b990d50 a2=0 a3=0 items=0 ppid=1881 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:38.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:42:38.041000 audit[2041]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:38.041000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffca1924900 a2=0 a3=0 items=0 ppid=1881 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:38.041000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:42:38.051000 audit[2047]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:38.051000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd493ebc80 a2=0 a3=0 items=0 ppid=1881 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:38.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:42:38.054000 audit[2049]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:38.054000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc6f370470 a2=0 a3=0 items=0 ppid=1881 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:38.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:42:38.056000 audit[2051]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:38.056000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffce9b626a0 a2=0 a3=0 items=0 ppid=1881 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:38.056000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:42:38.059000 audit[2053]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:38.059000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffccd1b9f90 a2=0 a3=0 items=0 ppid=1881 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:38.059000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:42:38.061000 audit[2055]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:38.061000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe524dbda0 a2=0 a3=0 items=0 ppid=1881 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:38.061000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:42:38.062356 systemd-networkd[1502]: docker0: Link UP Jan 14 01:42:38.062646 systemd-timesyncd[1529]: Network configuration 
changed, trying to establish connection. Jan 14 01:42:38.064998 dockerd[1881]: time="2026-01-14T01:42:38.064975795Z" level=info msg="Loading containers: done." Jan 14 01:42:38.078056 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1987543027-merged.mount: Deactivated successfully. Jan 14 01:42:38.084575 dockerd[1881]: time="2026-01-14T01:42:38.084542206Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:42:38.084691 dockerd[1881]: time="2026-01-14T01:42:38.084606825Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:42:38.084691 dockerd[1881]: time="2026-01-14T01:42:38.084682835Z" level=info msg="Initializing buildkit" Jan 14 01:42:38.103905 dockerd[1881]: time="2026-01-14T01:42:38.103871566Z" level=info msg="Completed buildkit initialization" Jan 14 01:42:38.110482 dockerd[1881]: time="2026-01-14T01:42:38.110457503Z" level=info msg="Daemon has completed initialization" Jan 14 01:42:38.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:38.111012 dockerd[1881]: time="2026-01-14T01:42:38.110515723Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:42:38.110727 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:42:38.689300 containerd[1600]: time="2026-01-14T01:42:38.689197933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 01:42:39.460865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131302150.mount: Deactivated successfully. 
Jan 14 01:42:40.485358 containerd[1600]: time="2026-01-14T01:42:40.485230575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:40.486950 containerd[1600]: time="2026-01-14T01:42:40.486628674Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Jan 14 01:42:40.487560 containerd[1600]: time="2026-01-14T01:42:40.487526024Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:40.491419 containerd[1600]: time="2026-01-14T01:42:40.491375462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:40.492314 containerd[1600]: time="2026-01-14T01:42:40.492279631Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.803039468s" Jan 14 01:42:40.492314 containerd[1600]: time="2026-01-14T01:42:40.492315431Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 14 01:42:40.493160 containerd[1600]: time="2026-01-14T01:42:40.493134881Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 01:42:42.041472 containerd[1600]: time="2026-01-14T01:42:42.041394747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:42.042969 containerd[1600]: time="2026-01-14T01:42:42.042635966Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 14 01:42:42.043564 containerd[1600]: time="2026-01-14T01:42:42.043531076Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:42.046066 containerd[1600]: time="2026-01-14T01:42:42.046034564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:42.047104 containerd[1600]: time="2026-01-14T01:42:42.047053844Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.553891413s" Jan 14 01:42:42.047161 containerd[1600]: time="2026-01-14T01:42:42.047107304Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 14 01:42:42.048319 containerd[1600]: time="2026-01-14T01:42:42.048209143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 01:42:43.387788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 01:42:43.390716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:42:43.403314 containerd[1600]: time="2026-01-14T01:42:43.403289816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:43.404543 containerd[1600]: time="2026-01-14T01:42:43.404522725Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 14 01:42:43.405662 containerd[1600]: time="2026-01-14T01:42:43.405603974Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:43.407921 containerd[1600]: time="2026-01-14T01:42:43.407883833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:43.408863 containerd[1600]: time="2026-01-14T01:42:43.408560963Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.36032015s" Jan 14 01:42:43.408863 containerd[1600]: time="2026-01-14T01:42:43.408585233Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 14 01:42:43.409368 containerd[1600]: time="2026-01-14T01:42:43.409346112Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 01:42:43.572731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:42:43.576483 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 01:42:43.576551 kernel: audit: type=1130 audit(1768354963.573:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:43.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:43.587520 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:42:43.620956 kubelet[2168]: E0114 01:42:43.620905 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:42:43.625851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:42:43.626039 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:42:43.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:42:43.626522 systemd[1]: kubelet.service: Consumed 188ms CPU time, 109.8M memory peak. Jan 14 01:42:43.632277 kernel: audit: type=1131 audit(1768354963.626:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:42:44.609749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656493388.mount: Deactivated successfully. Jan 14 01:42:44.998289 containerd[1600]: time="2026-01-14T01:42:44.998189218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:45.002265 containerd[1600]: time="2026-01-14T01:42:45.000288807Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 14 01:42:45.002265 containerd[1600]: time="2026-01-14T01:42:45.000457227Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:45.003561 containerd[1600]: time="2026-01-14T01:42:45.003529215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:45.005209 containerd[1600]: time="2026-01-14T01:42:45.005174714Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.595799452s" Jan 14 01:42:45.005285 containerd[1600]: time="2026-01-14T01:42:45.005211184Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 14 01:42:45.005814 containerd[1600]: time="2026-01-14T01:42:45.005761994Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 01:42:45.664624 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3454980414.mount: Deactivated successfully. Jan 14 01:42:46.350040 containerd[1600]: time="2026-01-14T01:42:46.348788092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:46.350040 containerd[1600]: time="2026-01-14T01:42:46.349922672Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Jan 14 01:42:46.350040 containerd[1600]: time="2026-01-14T01:42:46.349997532Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:46.352352 containerd[1600]: time="2026-01-14T01:42:46.352332621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:46.353271 containerd[1600]: time="2026-01-14T01:42:46.353233520Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.347428496s" Jan 14 01:42:46.353762 containerd[1600]: time="2026-01-14T01:42:46.353729790Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 14 01:42:46.356818 containerd[1600]: time="2026-01-14T01:42:46.356786998Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 01:42:47.012606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093885527.mount: 
Deactivated successfully. Jan 14 01:42:47.015926 containerd[1600]: time="2026-01-14T01:42:47.015893859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:42:47.016509 containerd[1600]: time="2026-01-14T01:42:47.016488148Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:42:47.017914 containerd[1600]: time="2026-01-14T01:42:47.016996668Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:42:47.018391 containerd[1600]: time="2026-01-14T01:42:47.018366108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:42:47.019050 containerd[1600]: time="2026-01-14T01:42:47.019030087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 662.145819ms" Jan 14 01:42:47.019126 containerd[1600]: time="2026-01-14T01:42:47.019111887Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 01:42:47.019605 containerd[1600]: time="2026-01-14T01:42:47.019579187Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 01:42:47.743089 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount855795578.mount: Deactivated successfully. Jan 14 01:42:49.531204 containerd[1600]: time="2026-01-14T01:42:49.531103081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:49.532881 containerd[1600]: time="2026-01-14T01:42:49.532844890Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Jan 14 01:42:49.534096 containerd[1600]: time="2026-01-14T01:42:49.534018269Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:49.537439 containerd[1600]: time="2026-01-14T01:42:49.536240248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:42:49.537439 containerd[1600]: time="2026-01-14T01:42:49.537327548Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.517723351s" Jan 14 01:42:49.537439 containerd[1600]: time="2026-01-14T01:42:49.537355588Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 14 01:42:51.999031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:42:51.999615 systemd[1]: kubelet.service: Consumed 188ms CPU time, 109.8M memory peak. 
Jan 14 01:42:51.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:52.011134 kernel: audit: type=1130 audit(1768354971.999:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:52.011187 kernel: audit: type=1131 audit(1768354971.999:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:51.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:52.007395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:42:52.037299 systemd[1]: Reload requested from client PID 2324 ('systemctl') (unit session-8.scope)... Jan 14 01:42:52.037313 systemd[1]: Reloading... Jan 14 01:42:52.206277 zram_generator::config[2377]: No configuration found. Jan 14 01:42:52.494629 systemd[1]: Reloading finished in 456 ms. 
Jan 14 01:42:52.532000 audit: BPF prog-id=67 op=LOAD Jan 14 01:42:52.536670 kernel: audit: type=1334 audit(1768354972.532:297): prog-id=67 op=LOAD Jan 14 01:42:52.536743 kernel: audit: type=1334 audit(1768354972.532:298): prog-id=52 op=UNLOAD Jan 14 01:42:52.532000 audit: BPF prog-id=52 op=UNLOAD Jan 14 01:42:52.538732 kernel: audit: type=1334 audit(1768354972.532:299): prog-id=68 op=LOAD Jan 14 01:42:52.532000 audit: BPF prog-id=68 op=LOAD Jan 14 01:42:52.543938 kernel: audit: type=1334 audit(1768354972.532:300): prog-id=69 op=LOAD Jan 14 01:42:52.532000 audit: BPF prog-id=69 op=LOAD Jan 14 01:42:52.532000 audit: BPF prog-id=53 op=UNLOAD Jan 14 01:42:52.532000 audit: BPF prog-id=54 op=UNLOAD Jan 14 01:42:52.535000 audit: BPF prog-id=70 op=LOAD Jan 14 01:42:52.535000 audit: BPF prog-id=55 op=UNLOAD Jan 14 01:42:52.535000 audit: BPF prog-id=71 op=LOAD Jan 14 01:42:52.535000 audit: BPF prog-id=72 op=LOAD Jan 14 01:42:52.544263 kernel: audit: type=1334 audit(1768354972.532:301): prog-id=53 op=UNLOAD Jan 14 01:42:52.544292 kernel: audit: type=1334 audit(1768354972.532:302): prog-id=54 op=UNLOAD Jan 14 01:42:52.544306 kernel: audit: type=1334 audit(1768354972.535:303): prog-id=70 op=LOAD Jan 14 01:42:52.544325 kernel: audit: type=1334 audit(1768354972.535:304): prog-id=55 op=UNLOAD Jan 14 01:42:52.535000 audit: BPF prog-id=56 op=UNLOAD Jan 14 01:42:52.535000 audit: BPF prog-id=57 op=UNLOAD Jan 14 01:42:52.536000 audit: BPF prog-id=73 op=LOAD Jan 14 01:42:52.536000 audit: BPF prog-id=74 op=LOAD Jan 14 01:42:52.536000 audit: BPF prog-id=50 op=UNLOAD Jan 14 01:42:52.536000 audit: BPF prog-id=51 op=UNLOAD Jan 14 01:42:52.538000 audit: BPF prog-id=75 op=LOAD Jan 14 01:42:52.538000 audit: BPF prog-id=66 op=UNLOAD Jan 14 01:42:52.543000 audit: BPF prog-id=76 op=LOAD Jan 14 01:42:52.543000 audit: BPF prog-id=60 op=UNLOAD Jan 14 01:42:52.546000 audit: BPF prog-id=77 op=LOAD Jan 14 01:42:52.546000 audit: BPF prog-id=78 op=LOAD Jan 14 01:42:52.546000 audit: BPF prog-id=61 
op=UNLOAD Jan 14 01:42:52.546000 audit: BPF prog-id=62 op=UNLOAD Jan 14 01:42:52.548000 audit: BPF prog-id=79 op=LOAD Jan 14 01:42:52.548000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:42:52.548000 audit: BPF prog-id=80 op=LOAD Jan 14 01:42:52.548000 audit: BPF prog-id=81 op=LOAD Jan 14 01:42:52.548000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:42:52.548000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:42:52.550000 audit: BPF prog-id=82 op=LOAD Jan 14 01:42:52.550000 audit: BPF prog-id=59 op=UNLOAD Jan 14 01:42:52.552000 audit: BPF prog-id=83 op=LOAD Jan 14 01:42:52.552000 audit: BPF prog-id=63 op=UNLOAD Jan 14 01:42:52.552000 audit: BPF prog-id=84 op=LOAD Jan 14 01:42:52.552000 audit: BPF prog-id=85 op=LOAD Jan 14 01:42:52.552000 audit: BPF prog-id=64 op=UNLOAD Jan 14 01:42:52.552000 audit: BPF prog-id=65 op=UNLOAD Jan 14 01:42:52.554000 audit: BPF prog-id=86 op=LOAD Jan 14 01:42:52.557000 audit: BPF prog-id=58 op=UNLOAD Jan 14 01:42:52.558000 audit: BPF prog-id=87 op=LOAD Jan 14 01:42:52.558000 audit: BPF prog-id=49 op=UNLOAD Jan 14 01:42:52.558000 audit: BPF prog-id=88 op=LOAD Jan 14 01:42:52.558000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:42:52.559000 audit: BPF prog-id=89 op=LOAD Jan 14 01:42:52.559000 audit: BPF prog-id=90 op=LOAD Jan 14 01:42:52.559000 audit: BPF prog-id=47 op=UNLOAD Jan 14 01:42:52.559000 audit: BPF prog-id=48 op=UNLOAD Jan 14 01:42:52.578846 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:42:52.579018 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:42:52.579447 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:42:52.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:42:52.579540 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98.5M memory peak. 
Jan 14 01:42:52.581823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:42:52.776517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:42:52.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:52.787538 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:42:52.824930 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:42:52.825200 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:42:52.825238 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 01:42:52.825429 kubelet[2426]: I0114 01:42:52.825405 2426 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:42:53.319607 kubelet[2426]: I0114 01:42:53.319535 2426 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:42:53.319607 kubelet[2426]: I0114 01:42:53.319563 2426 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:42:53.319906 kubelet[2426]: I0114 01:42:53.319808 2426 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:42:53.359956 kubelet[2426]: I0114 01:42:53.359576 2426 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:42:53.359956 kubelet[2426]: E0114 01:42:53.359888 2426 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.193.229:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.193.229:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:42:53.365980 kubelet[2426]: I0114 01:42:53.365950 2426 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:42:53.370087 kubelet[2426]: I0114 01:42:53.370069 2426 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:42:53.370384 kubelet[2426]: I0114 01:42:53.370348 2426 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:42:53.370526 kubelet[2426]: I0114 01:42:53.370380 2426 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-193-229","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:42:53.370693 kubelet[2426]: I0114 01:42:53.370530 2426 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 
01:42:53.370693 kubelet[2426]: I0114 01:42:53.370538 2426 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:42:53.371397 kubelet[2426]: I0114 01:42:53.371375 2426 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:42:53.374428 kubelet[2426]: I0114 01:42:53.374216 2426 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:42:53.374428 kubelet[2426]: I0114 01:42:53.374262 2426 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:42:53.374428 kubelet[2426]: I0114 01:42:53.374287 2426 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:42:53.374428 kubelet[2426]: I0114 01:42:53.374303 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:42:53.379833 kubelet[2426]: E0114 01:42:53.379811 2426 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.193.229:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-193-229&limit=500&resourceVersion=0\": dial tcp 172.239.193.229:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:42:53.381217 kubelet[2426]: I0114 01:42:53.380559 2426 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:42:53.381217 kubelet[2426]: I0114 01:42:53.381158 2426 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:42:53.382487 kubelet[2426]: W0114 01:42:53.382456 2426 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 14 01:42:53.386504 kubelet[2426]: I0114 01:42:53.386479 2426 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:42:53.386559 kubelet[2426]: I0114 01:42:53.386523 2426 server.go:1289] "Started kubelet" Jan 14 01:42:53.394227 kubelet[2426]: I0114 01:42:53.394193 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:42:53.398835 kubelet[2426]: I0114 01:42:53.397632 2426 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:42:53.398835 kubelet[2426]: I0114 01:42:53.398602 2426 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:42:53.402671 kubelet[2426]: E0114 01:42:53.402650 2426 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.193.229:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.193.229:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:42:53.403623 kubelet[2426]: I0114 01:42:53.403023 2426 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:42:53.405042 kubelet[2426]: I0114 01:42:53.405021 2426 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:42:53.405202 kubelet[2426]: E0114 01:42:53.405179 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-193-229\" not found" Jan 14 01:42:53.405875 kubelet[2426]: I0114 01:42:53.405668 2426 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:42:53.405917 kubelet[2426]: I0114 01:42:53.405908 2426 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:42:53.407000 audit[2441]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:42:53.407000 audit[2441]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc7ada3ec0 a2=0 a3=0 items=0 ppid=2426 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.407000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:42:53.407983 kubelet[2426]: I0114 01:42:53.407949 2426 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 01:42:53.408000 audit[2442]: NETFILTER_CFG table=mangle:43 family=2 entries=2 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:53.409007 kubelet[2426]: I0114 01:42:53.408356 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:42:53.409007 kubelet[2426]: I0114 01:42:53.408562 2426 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:42:53.408000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff7e5742a0 a2=0 a3=0 items=0 ppid=2426 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:42:53.410505 kubelet[2426]: E0114 01:42:53.409384 2426 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.193.229:6443/api/v1/namespaces/default/events\": dial tcp 172.239.193.229:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-193-229.188a757aa97f6d6b default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-193-229,UID:172-239-193-229,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-193-229,},FirstTimestamp:2026-01-14 01:42:53.386501483 +0000 UTC m=+0.594127464,LastTimestamp:2026-01-14 01:42:53.386501483 +0000 UTC m=+0.594127464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-193-229,}" Jan 14 01:42:53.410000 audit[2443]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:53.410000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd067bcac0 a2=0 a3=0 items=0 ppid=2426 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:42:53.411710 kubelet[2426]: E0114 01:42:53.411687 2426 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.193.229:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.193.229:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:42:53.411798 kubelet[2426]: E0114 01:42:53.411750 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.229:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-229?timeout=10s\": dial tcp 172.239.193.229:6443: connect: connection refused" interval="200ms" Jan 14 01:42:53.411939 kubelet[2426]: I0114 01:42:53.411920 
2426 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:42:53.412000 kubelet[2426]: I0114 01:42:53.411983 2426 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:42:53.413000 audit[2445]: NETFILTER_CFG table=mangle:45 family=10 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:42:53.413000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0c3d18a0 a2=0 a3=0 items=0 ppid=2426 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.413000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:42:53.415000 audit[2447]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:42:53.416000 audit[2446]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:53.415000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff21f613b0 a2=0 a3=0 items=0 ppid=2426 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:42:53.416000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc18c46690 a2=0 a3=0 items=0 ppid=2426 pid=2446 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:42:53.417000 audit[2448]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:42:53.417000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbb4d4740 a2=0 a3=0 items=0 ppid=2426 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.417000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:42:53.419432 kubelet[2426]: I0114 01:42:53.419417 2426 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:42:53.420000 audit[2450]: NETFILTER_CFG table=filter:49 family=2 entries=2 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:53.420000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcda474d90 a2=0 a3=0 items=0 ppid=2426 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.420000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:42:53.428000 audit[2453]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2453 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:53.428000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffd0be3290 a2=0 a3=0 items=0 ppid=2426 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 01:42:53.428848 kubelet[2426]: I0114 01:42:53.428833 2426 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:42:53.428900 kubelet[2426]: I0114 01:42:53.428891 2426 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:42:53.428960 kubelet[2426]: I0114 01:42:53.428950 2426 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 01:42:53.429003 kubelet[2426]: I0114 01:42:53.428995 2426 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:42:53.429087 kubelet[2426]: E0114 01:42:53.429066 2426 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:42:53.430000 audit[2454]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:53.430000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd875f7cf0 a2=0 a3=0 items=0 ppid=2426 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:42:53.431000 audit[2456]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:53.431000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7bebef90 a2=0 a3=0 items=0 ppid=2426 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:42:53.432000 audit[2457]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:42:53.432000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdab5a8c70 a2=0 a3=0 items=0 ppid=2426 pid=2457 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:42:53.435070 kubelet[2426]: E0114 01:42:53.434995 2426 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.193.229:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.193.229:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:42:53.438951 kubelet[2426]: E0114 01:42:53.438935 2426 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:42:53.443052 kubelet[2426]: I0114 01:42:53.443036 2426 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:42:53.443052 kubelet[2426]: I0114 01:42:53.443048 2426 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:42:53.443162 kubelet[2426]: I0114 01:42:53.443147 2426 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:42:53.445073 kubelet[2426]: I0114 01:42:53.445052 2426 policy_none.go:49] "None policy: Start" Jan 14 01:42:53.445073 kubelet[2426]: I0114 01:42:53.445072 2426 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:42:53.445146 kubelet[2426]: I0114 01:42:53.445084 2426 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:42:53.450565 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 01:42:53.466318 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 14 01:42:53.470069 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 01:42:53.480329 kubelet[2426]: E0114 01:42:53.480308 2426 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:42:53.480600 kubelet[2426]: I0114 01:42:53.480586 2426 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:42:53.480676 kubelet[2426]: I0114 01:42:53.480646 2426 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:42:53.480970 kubelet[2426]: I0114 01:42:53.480957 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:42:53.482868 kubelet[2426]: E0114 01:42:53.482798 2426 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:42:53.482868 kubelet[2426]: E0114 01:42:53.482836 2426 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-193-229\" not found" Jan 14 01:42:53.541414 systemd[1]: Created slice kubepods-burstable-podb006bffc2533fd8e9e97a0b16e60f945.slice - libcontainer container kubepods-burstable-podb006bffc2533fd8e9e97a0b16e60f945.slice. Jan 14 01:42:53.559944 kubelet[2426]: E0114 01:42:53.559907 2426 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-229\" not found" node="172-239-193-229" Jan 14 01:42:53.564207 systemd[1]: Created slice kubepods-burstable-pod73def85ddab0dff5a16485cb358811f0.slice - libcontainer container kubepods-burstable-pod73def85ddab0dff5a16485cb358811f0.slice. 
Jan 14 01:42:53.567709 kubelet[2426]: E0114 01:42:53.567686 2426 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-229\" not found" node="172-239-193-229" Jan 14 01:42:53.571918 systemd[1]: Created slice kubepods-burstable-pod513df8d42a4a98d528c60075e2a30332.slice - libcontainer container kubepods-burstable-pod513df8d42a4a98d528c60075e2a30332.slice. Jan 14 01:42:53.575290 kubelet[2426]: E0114 01:42:53.575240 2426 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-229\" not found" node="172-239-193-229" Jan 14 01:42:53.582524 kubelet[2426]: I0114 01:42:53.582497 2426 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-229" Jan 14 01:42:53.582982 kubelet[2426]: E0114 01:42:53.582962 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.193.229:6443/api/v1/nodes\": dial tcp 172.239.193.229:6443: connect: connection refused" node="172-239-193-229" Jan 14 01:42:53.607891 kubelet[2426]: I0114 01:42:53.607383 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73def85ddab0dff5a16485cb358811f0-ca-certs\") pod \"kube-controller-manager-172-239-193-229\" (UID: \"73def85ddab0dff5a16485cb358811f0\") " pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:53.607891 kubelet[2426]: I0114 01:42:53.607410 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73def85ddab0dff5a16485cb358811f0-flexvolume-dir\") pod \"kube-controller-manager-172-239-193-229\" (UID: \"73def85ddab0dff5a16485cb358811f0\") " pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:53.607891 kubelet[2426]: I0114 01:42:53.607428 2426 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73def85ddab0dff5a16485cb358811f0-kubeconfig\") pod \"kube-controller-manager-172-239-193-229\" (UID: \"73def85ddab0dff5a16485cb358811f0\") " pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:53.607891 kubelet[2426]: I0114 01:42:53.607445 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73def85ddab0dff5a16485cb358811f0-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-193-229\" (UID: \"73def85ddab0dff5a16485cb358811f0\") " pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:53.607891 kubelet[2426]: I0114 01:42:53.607461 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/513df8d42a4a98d528c60075e2a30332-kubeconfig\") pod \"kube-scheduler-172-239-193-229\" (UID: \"513df8d42a4a98d528c60075e2a30332\") " pod="kube-system/kube-scheduler-172-239-193-229" Jan 14 01:42:53.608038 kubelet[2426]: I0114 01:42:53.607476 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b006bffc2533fd8e9e97a0b16e60f945-ca-certs\") pod \"kube-apiserver-172-239-193-229\" (UID: \"b006bffc2533fd8e9e97a0b16e60f945\") " pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:53.608038 kubelet[2426]: I0114 01:42:53.607828 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b006bffc2533fd8e9e97a0b16e60f945-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-193-229\" (UID: \"b006bffc2533fd8e9e97a0b16e60f945\") " pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:53.608038 
kubelet[2426]: I0114 01:42:53.607846 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73def85ddab0dff5a16485cb358811f0-k8s-certs\") pod \"kube-controller-manager-172-239-193-229\" (UID: \"73def85ddab0dff5a16485cb358811f0\") " pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:53.608038 kubelet[2426]: I0114 01:42:53.607860 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b006bffc2533fd8e9e97a0b16e60f945-k8s-certs\") pod \"kube-apiserver-172-239-193-229\" (UID: \"b006bffc2533fd8e9e97a0b16e60f945\") " pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:53.612804 kubelet[2426]: E0114 01:42:53.612657 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.229:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-229?timeout=10s\": dial tcp 172.239.193.229:6443: connect: connection refused" interval="400ms" Jan 14 01:42:53.786377 kubelet[2426]: I0114 01:42:53.786281 2426 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-229" Jan 14 01:42:53.786842 kubelet[2426]: E0114 01:42:53.786794 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.193.229:6443/api/v1/nodes\": dial tcp 172.239.193.229:6443: connect: connection refused" node="172-239-193-229" Jan 14 01:42:53.860550 kubelet[2426]: E0114 01:42:53.860385 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:53.861515 containerd[1600]: time="2026-01-14T01:42:53.861480455Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-239-193-229,Uid:b006bffc2533fd8e9e97a0b16e60f945,Namespace:kube-system,Attempt:0,}" Jan 14 01:42:53.868905 kubelet[2426]: E0114 01:42:53.868852 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:53.869268 containerd[1600]: time="2026-01-14T01:42:53.869221841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-193-229,Uid:73def85ddab0dff5a16485cb358811f0,Namespace:kube-system,Attempt:0,}" Jan 14 01:42:53.878270 kubelet[2426]: E0114 01:42:53.877475 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:53.889380 containerd[1600]: time="2026-01-14T01:42:53.889352841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-193-229,Uid:513df8d42a4a98d528c60075e2a30332,Namespace:kube-system,Attempt:0,}" Jan 14 01:42:53.893477 containerd[1600]: time="2026-01-14T01:42:53.893453759Z" level=info msg="connecting to shim 5457fe648f731c6bf8501a2be50e9fb175e0517cf279b3e72a0e32634000b889" address="unix:///run/containerd/s/eea230144499dd04506fa922bebc9f4e41345a4103c0d5a1ca7b2d5eedb91c7e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:42:53.893701 containerd[1600]: time="2026-01-14T01:42:53.893577069Z" level=info msg="connecting to shim 287ee7023eb63dc95dcd8f4b28c5d93229bc933ad6b6b9194263012d09fe7b79" address="unix:///run/containerd/s/13f33555b1ce5c0c52cb91d9a797ffe7356a54fdd0ea4d69bcf561f566207369" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:42:53.941410 systemd[1]: Started cri-containerd-287ee7023eb63dc95dcd8f4b28c5d93229bc933ad6b6b9194263012d09fe7b79.scope - libcontainer container 287ee7023eb63dc95dcd8f4b28c5d93229bc933ad6b6b9194263012d09fe7b79. 
Jan 14 01:42:53.947856 systemd[1]: Started cri-containerd-5457fe648f731c6bf8501a2be50e9fb175e0517cf279b3e72a0e32634000b889.scope - libcontainer container 5457fe648f731c6bf8501a2be50e9fb175e0517cf279b3e72a0e32634000b889. Jan 14 01:42:53.951627 containerd[1600]: time="2026-01-14T01:42:53.951372740Z" level=info msg="connecting to shim 0413820811de242f5f761dcdf92c358a978b9fde08f4099b63ea612af92e9756" address="unix:///run/containerd/s/1dc88558d541c17b709a4a98fb2426a23348dc8124fe2b878d7ab016c1b8ad1e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:42:53.963000 audit: BPF prog-id=91 op=LOAD Jan 14 01:42:53.964000 audit: BPF prog-id=92 op=LOAD Jan 14 01:42:53.964000 audit[2500]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2480 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238376565373032336562363364633935646364386634623238633564 Jan 14 01:42:53.964000 audit: BPF prog-id=92 op=UNLOAD Jan 14 01:42:53.964000 audit[2500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238376565373032336562363364633935646364386634623238633564 Jan 14 01:42:53.964000 audit: BPF prog-id=93 
op=LOAD Jan 14 01:42:53.964000 audit[2500]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2480 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238376565373032336562363364633935646364386634623238633564 Jan 14 01:42:53.964000 audit: BPF prog-id=94 op=LOAD Jan 14 01:42:53.964000 audit[2500]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2480 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238376565373032336562363364633935646364386634623238633564 Jan 14 01:42:53.964000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:42:53.964000 audit[2500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238376565373032336562363364633935646364386634623238633564 Jan 
14 01:42:53.964000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:42:53.964000 audit[2500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238376565373032336562363364633935646364386634623238633564 Jan 14 01:42:53.964000 audit: BPF prog-id=95 op=LOAD Jan 14 01:42:53.964000 audit[2500]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2480 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238376565373032336562363364633935646364386634623238633564 Jan 14 01:42:53.976000 audit: BPF prog-id=96 op=LOAD Jan 14 01:42:53.977000 audit: BPF prog-id=97 op=LOAD Jan 14 01:42:53.977000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2474 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.977000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534353766653634386637333163366266383530316132626535306539 Jan 14 01:42:53.977000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:42:53.977000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534353766653634386637333163366266383530316132626535306539 Jan 14 01:42:53.977000 audit: BPF prog-id=98 op=LOAD Jan 14 01:42:53.977000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2474 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534353766653634386637333163366266383530316132626535306539 Jan 14 01:42:53.978000 audit: BPF prog-id=99 op=LOAD Jan 14 01:42:53.978000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2474 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 01:42:53.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534353766653634386637333163366266383530316132626535306539 Jan 14 01:42:53.978000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:42:53.978000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534353766653634386637333163366266383530316132626535306539 Jan 14 01:42:53.978000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:42:53.978000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534353766653634386637333163366266383530316132626535306539 Jan 14 01:42:53.978000 audit: BPF prog-id=100 op=LOAD Jan 14 01:42:53.978000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2474 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:53.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534353766653634386637333163366266383530316132626535306539 Jan 14 01:42:53.995496 systemd[1]: Started cri-containerd-0413820811de242f5f761dcdf92c358a978b9fde08f4099b63ea612af92e9756.scope - libcontainer container 0413820811de242f5f761dcdf92c358a978b9fde08f4099b63ea612af92e9756. Jan 14 01:42:54.011370 containerd[1600]: time="2026-01-14T01:42:54.010806530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-193-229,Uid:73def85ddab0dff5a16485cb358811f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"287ee7023eb63dc95dcd8f4b28c5d93229bc933ad6b6b9194263012d09fe7b79\"" Jan 14 01:42:54.013379 kubelet[2426]: E0114 01:42:54.013353 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:54.014322 kubelet[2426]: E0114 01:42:54.013852 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.229:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-229?timeout=10s\": dial tcp 172.239.193.229:6443: connect: connection refused" interval="800ms" Jan 14 01:42:54.017327 containerd[1600]: time="2026-01-14T01:42:54.017299737Z" level=info msg="CreateContainer within sandbox \"287ee7023eb63dc95dcd8f4b28c5d93229bc933ad6b6b9194263012d09fe7b79\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:42:54.022335 containerd[1600]: time="2026-01-14T01:42:54.022308155Z" level=info msg="Container b72a27b54cd906adcb5fe171c87a94194c955fbe6139f26b472a9b1a5caa8af6: CDI devices from CRI Config.CDIDevices: []" Jan 14 
01:42:54.026743 containerd[1600]: time="2026-01-14T01:42:54.026654943Z" level=info msg="CreateContainer within sandbox \"287ee7023eb63dc95dcd8f4b28c5d93229bc933ad6b6b9194263012d09fe7b79\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b72a27b54cd906adcb5fe171c87a94194c955fbe6139f26b472a9b1a5caa8af6\"" Jan 14 01:42:54.027073 containerd[1600]: time="2026-01-14T01:42:54.027033212Z" level=info msg="StartContainer for \"b72a27b54cd906adcb5fe171c87a94194c955fbe6139f26b472a9b1a5caa8af6\"" Jan 14 01:42:54.028637 containerd[1600]: time="2026-01-14T01:42:54.028258002Z" level=info msg="connecting to shim b72a27b54cd906adcb5fe171c87a94194c955fbe6139f26b472a9b1a5caa8af6" address="unix:///run/containerd/s/13f33555b1ce5c0c52cb91d9a797ffe7356a54fdd0ea4d69bcf561f566207369" protocol=ttrpc version=3 Jan 14 01:42:54.034000 audit: BPF prog-id=101 op=LOAD Jan 14 01:42:54.035000 audit: BPF prog-id=102 op=LOAD Jan 14 01:42:54.035000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2535 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034313338323038313164653234326635663736316463646639326333 Jan 14 01:42:54.035000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:42:54.035000 audit[2561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.035000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034313338323038313164653234326635663736316463646639326333 Jan 14 01:42:54.035000 audit: BPF prog-id=103 op=LOAD Jan 14 01:42:54.035000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2535 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034313338323038313164653234326635663736316463646639326333 Jan 14 01:42:54.035000 audit: BPF prog-id=104 op=LOAD Jan 14 01:42:54.035000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2535 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034313338323038313164653234326635663736316463646639326333 Jan 14 01:42:54.036000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:42:54.036000 audit[2561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:42:54.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034313338323038313164653234326635663736316463646639326333 Jan 14 01:42:54.036000 audit: BPF prog-id=103 op=UNLOAD Jan 14 01:42:54.036000 audit[2561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034313338323038313164653234326635663736316463646639326333 Jan 14 01:42:54.036000 audit: BPF prog-id=105 op=LOAD Jan 14 01:42:54.036000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2535 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034313338323038313164653234326635663736316463646639326333 Jan 14 01:42:54.058542 systemd[1]: Started cri-containerd-b72a27b54cd906adcb5fe171c87a94194c955fbe6139f26b472a9b1a5caa8af6.scope - libcontainer container b72a27b54cd906adcb5fe171c87a94194c955fbe6139f26b472a9b1a5caa8af6. 
Jan 14 01:42:54.061799 containerd[1600]: time="2026-01-14T01:42:54.061766635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-193-229,Uid:b006bffc2533fd8e9e97a0b16e60f945,Namespace:kube-system,Attempt:0,} returns sandbox id \"5457fe648f731c6bf8501a2be50e9fb175e0517cf279b3e72a0e32634000b889\"" Jan 14 01:42:54.062770 kubelet[2426]: E0114 01:42:54.062675 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:54.066133 containerd[1600]: time="2026-01-14T01:42:54.066084773Z" level=info msg="CreateContainer within sandbox \"5457fe648f731c6bf8501a2be50e9fb175e0517cf279b3e72a0e32634000b889\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:42:54.075623 containerd[1600]: time="2026-01-14T01:42:54.075593038Z" level=info msg="Container ec3c0959bd0209da61b9d102c4eb05312dd6fc753651d9ff720dd000a3b87790: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:42:54.086000 audit: BPF prog-id=106 op=LOAD Jan 14 01:42:54.086000 audit: BPF prog-id=107 op=LOAD Jan 14 01:42:54.086000 audit[2586]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2480 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237326132376235346364393036616463623566653137316338376139 Jan 14 01:42:54.086000 audit: BPF prog-id=107 op=UNLOAD Jan 14 01:42:54.086000 audit[2586]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2586 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237326132376235346364393036616463623566653137316338376139 Jan 14 01:42:54.086000 audit: BPF prog-id=108 op=LOAD Jan 14 01:42:54.086000 audit[2586]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2480 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237326132376235346364393036616463623566653137316338376139 Jan 14 01:42:54.087000 audit: BPF prog-id=109 op=LOAD Jan 14 01:42:54.087000 audit[2586]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2480 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237326132376235346364393036616463623566653137316338376139 Jan 14 01:42:54.087000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:42:54.087000 audit[2586]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 
a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237326132376235346364393036616463623566653137316338376139 Jan 14 01:42:54.087000 audit: BPF prog-id=108 op=UNLOAD Jan 14 01:42:54.087000 audit[2586]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237326132376235346364393036616463623566653137316338376139 Jan 14 01:42:54.087000 audit: BPF prog-id=110 op=LOAD Jan 14 01:42:54.087000 audit[2586]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2480 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237326132376235346364393036616463623566653137316338376139 Jan 14 01:42:54.095091 containerd[1600]: time="2026-01-14T01:42:54.095023938Z" level=info msg="CreateContainer 
within sandbox \"5457fe648f731c6bf8501a2be50e9fb175e0517cf279b3e72a0e32634000b889\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec3c0959bd0209da61b9d102c4eb05312dd6fc753651d9ff720dd000a3b87790\"" Jan 14 01:42:54.096910 containerd[1600]: time="2026-01-14T01:42:54.096492118Z" level=info msg="StartContainer for \"ec3c0959bd0209da61b9d102c4eb05312dd6fc753651d9ff720dd000a3b87790\"" Jan 14 01:42:54.097507 containerd[1600]: time="2026-01-14T01:42:54.097452457Z" level=info msg="connecting to shim ec3c0959bd0209da61b9d102c4eb05312dd6fc753651d9ff720dd000a3b87790" address="unix:///run/containerd/s/eea230144499dd04506fa922bebc9f4e41345a4103c0d5a1ca7b2d5eedb91c7e" protocol=ttrpc version=3 Jan 14 01:42:54.120400 systemd[1]: Started cri-containerd-ec3c0959bd0209da61b9d102c4eb05312dd6fc753651d9ff720dd000a3b87790.scope - libcontainer container ec3c0959bd0209da61b9d102c4eb05312dd6fc753651d9ff720dd000a3b87790. Jan 14 01:42:54.125289 containerd[1600]: time="2026-01-14T01:42:54.125261853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-193-229,Uid:513df8d42a4a98d528c60075e2a30332,Namespace:kube-system,Attempt:0,} returns sandbox id \"0413820811de242f5f761dcdf92c358a978b9fde08f4099b63ea612af92e9756\"" Jan 14 01:42:54.126368 kubelet[2426]: E0114 01:42:54.126340 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:54.130559 containerd[1600]: time="2026-01-14T01:42:54.130539491Z" level=info msg="CreateContainer within sandbox \"0413820811de242f5f761dcdf92c358a978b9fde08f4099b63ea612af92e9756\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:42:54.141542 containerd[1600]: time="2026-01-14T01:42:54.141502655Z" level=info msg="Container 7949b518e4bd83bd684261662b29235be21c50c206718e49acc048935cabc417: CDI devices from CRI Config.CDIDevices: []" Jan 
14 01:42:54.148000 audit: BPF prog-id=111 op=LOAD Jan 14 01:42:54.149000 audit: BPF prog-id=112 op=LOAD Jan 14 01:42:54.149000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2474 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563336330393539626430323039646136316239643130326334656230 Jan 14 01:42:54.149000 audit: BPF prog-id=112 op=UNLOAD Jan 14 01:42:54.149000 audit[2613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563336330393539626430323039646136316239643130326334656230 Jan 14 01:42:54.153000 audit: BPF prog-id=113 op=LOAD Jan 14 01:42:54.153000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2474 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.153000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563336330393539626430323039646136316239643130326334656230 Jan 14 01:42:54.153000 audit: BPF prog-id=114 op=LOAD Jan 14 01:42:54.153000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2474 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563336330393539626430323039646136316239643130326334656230 Jan 14 01:42:54.153000 audit: BPF prog-id=114 op=UNLOAD Jan 14 01:42:54.153000 audit[2613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563336330393539626430323039646136316239643130326334656230 Jan 14 01:42:54.153000 audit: BPF prog-id=113 op=UNLOAD Jan 14 01:42:54.153000 audit[2613]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:42:54.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563336330393539626430323039646136316239643130326334656230 Jan 14 01:42:54.153000 audit: BPF prog-id=115 op=LOAD Jan 14 01:42:54.153000 audit[2613]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2474 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563336330393539626430323039646136316239643130326334656230 Jan 14 01:42:54.160313 containerd[1600]: time="2026-01-14T01:42:54.159390726Z" level=info msg="CreateContainer within sandbox \"0413820811de242f5f761dcdf92c358a978b9fde08f4099b63ea612af92e9756\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7949b518e4bd83bd684261662b29235be21c50c206718e49acc048935cabc417\"" Jan 14 01:42:54.160714 containerd[1600]: time="2026-01-14T01:42:54.160691836Z" level=info msg="StartContainer for \"7949b518e4bd83bd684261662b29235be21c50c206718e49acc048935cabc417\"" Jan 14 01:42:54.161991 containerd[1600]: time="2026-01-14T01:42:54.161592185Z" level=info msg="connecting to shim 7949b518e4bd83bd684261662b29235be21c50c206718e49acc048935cabc417" address="unix:///run/containerd/s/1dc88558d541c17b709a4a98fb2426a23348dc8124fe2b878d7ab016c1b8ad1e" protocol=ttrpc version=3 Jan 14 01:42:54.166434 containerd[1600]: time="2026-01-14T01:42:54.166406283Z" level=info msg="StartContainer for \"b72a27b54cd906adcb5fe171c87a94194c955fbe6139f26b472a9b1a5caa8af6\" 
returns successfully" Jan 14 01:42:54.190750 kubelet[2426]: I0114 01:42:54.190682 2426 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-229" Jan 14 01:42:54.191155 kubelet[2426]: E0114 01:42:54.191131 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.193.229:6443/api/v1/nodes\": dial tcp 172.239.193.229:6443: connect: connection refused" node="172-239-193-229" Jan 14 01:42:54.191549 systemd[1]: Started cri-containerd-7949b518e4bd83bd684261662b29235be21c50c206718e49acc048935cabc417.scope - libcontainer container 7949b518e4bd83bd684261662b29235be21c50c206718e49acc048935cabc417. Jan 14 01:42:54.214197 containerd[1600]: time="2026-01-14T01:42:54.214161369Z" level=info msg="StartContainer for \"ec3c0959bd0209da61b9d102c4eb05312dd6fc753651d9ff720dd000a3b87790\" returns successfully" Jan 14 01:42:54.243000 audit: BPF prog-id=116 op=LOAD Jan 14 01:42:54.244000 audit: BPF prog-id=117 op=LOAD Jan 14 01:42:54.244000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2535 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343962353138653462643833626436383432363136363262323932 Jan 14 01:42:54.244000 audit: BPF prog-id=117 op=UNLOAD Jan 14 01:42:54.244000 audit[2648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.244000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343962353138653462643833626436383432363136363262323932 Jan 14 01:42:54.244000 audit: BPF prog-id=118 op=LOAD Jan 14 01:42:54.244000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2535 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343962353138653462643833626436383432363136363262323932 Jan 14 01:42:54.244000 audit: BPF prog-id=119 op=LOAD Jan 14 01:42:54.244000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2535 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343962353138653462643833626436383432363136363262323932 Jan 14 01:42:54.245000 audit: BPF prog-id=119 op=UNLOAD Jan 14 01:42:54.245000 audit[2648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343962353138653462643833626436383432363136363262323932 Jan 14 01:42:54.245000 audit: BPF prog-id=118 op=UNLOAD Jan 14 01:42:54.245000 audit[2648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343962353138653462643833626436383432363136363262323932 Jan 14 01:42:54.245000 audit: BPF prog-id=120 op=LOAD Jan 14 01:42:54.245000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2535 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:42:54.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343962353138653462643833626436383432363136363262323932 Jan 14 01:42:54.299426 containerd[1600]: time="2026-01-14T01:42:54.299367096Z" level=info msg="StartContainer for \"7949b518e4bd83bd684261662b29235be21c50c206718e49acc048935cabc417\" returns successfully" Jan 14 01:42:54.448787 kubelet[2426]: E0114 01:42:54.448532 2426 kubelet.go:3305] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-229\" not found" node="172-239-193-229" Jan 14 01:42:54.448787 kubelet[2426]: E0114 01:42:54.448666 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:54.449956 kubelet[2426]: E0114 01:42:54.449618 2426 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-229\" not found" node="172-239-193-229" Jan 14 01:42:54.449956 kubelet[2426]: E0114 01:42:54.449704 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:54.453428 kubelet[2426]: E0114 01:42:54.453414 2426 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-229\" not found" node="172-239-193-229" Jan 14 01:42:54.453573 kubelet[2426]: E0114 01:42:54.453561 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:54.994966 kubelet[2426]: I0114 01:42:54.994467 2426 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-229" Jan 14 01:42:55.456894 kubelet[2426]: E0114 01:42:55.456462 2426 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-229\" not found" node="172-239-193-229" Jan 14 01:42:55.456894 kubelet[2426]: E0114 01:42:55.456570 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 
01:42:55.456894 kubelet[2426]: E0114 01:42:55.456777 2426 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-229\" not found" node="172-239-193-229" Jan 14 01:42:55.456894 kubelet[2426]: E0114 01:42:55.456853 2426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:55.822644 kubelet[2426]: E0114 01:42:55.822535 2426 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-193-229\" not found" node="172-239-193-229" Jan 14 01:42:55.866457 kubelet[2426]: I0114 01:42:55.866423 2426 kubelet_node_status.go:78] "Successfully registered node" node="172-239-193-229" Jan 14 01:42:55.906097 kubelet[2426]: I0114 01:42:55.906067 2426 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:55.910294 kubelet[2426]: E0114 01:42:55.910139 2426 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-193-229\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:55.910294 kubelet[2426]: I0114 01:42:55.910156 2426 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:55.911412 kubelet[2426]: E0114 01:42:55.911392 2426 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-193-229\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:55.911412 kubelet[2426]: I0114 01:42:55.911408 2426 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-229" Jan 14 01:42:55.913276 kubelet[2426]: E0114 01:42:55.912337 2426 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-193-229\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-193-229" Jan 14 01:42:56.388807 kubelet[2426]: I0114 01:42:56.388552 2426 apiserver.go:52] "Watching apiserver" Jan 14 01:42:56.406540 kubelet[2426]: I0114 01:42:56.406525 2426 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:42:57.748361 systemd[1]: Reload requested from client PID 2705 ('systemctl') (unit session-8.scope)... Jan 14 01:42:57.748387 systemd[1]: Reloading... Jan 14 01:42:57.905307 zram_generator::config[2755]: No configuration found. Jan 14 01:42:58.160651 systemd[1]: Reloading finished in 411 ms. Jan 14 01:42:58.194294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:42:58.217796 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:42:58.218185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:42:58.222304 kernel: kauditd_printk_skb: 210 callbacks suppressed Jan 14 01:42:58.222436 kernel: audit: type=1131 audit(1768354978.218:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:58.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:42:58.218432 systemd[1]: kubelet.service: Consumed 1.021s CPU time, 132.5M memory peak. Jan 14 01:42:58.226611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:42:58.231291 kernel: audit: type=1334 audit(1768354978.228:408): prog-id=121 op=LOAD Jan 14 01:42:58.228000 audit: BPF prog-id=121 op=LOAD Jan 14 01:42:58.228000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:42:58.236267 kernel: audit: type=1334 audit(1768354978.228:409): prog-id=75 op=UNLOAD Jan 14 01:42:58.229000 audit: BPF prog-id=122 op=LOAD Jan 14 01:42:58.240547 kernel: audit: type=1334 audit(1768354978.229:410): prog-id=122 op=LOAD Jan 14 01:42:58.240586 kernel: audit: type=1334 audit(1768354978.229:411): prog-id=86 op=UNLOAD Jan 14 01:42:58.229000 audit: BPF prog-id=86 op=UNLOAD Jan 14 01:42:58.242614 kernel: audit: type=1334 audit(1768354978.230:412): prog-id=123 op=LOAD Jan 14 01:42:58.230000 audit: BPF prog-id=123 op=LOAD Jan 14 01:42:58.244860 kernel: audit: type=1334 audit(1768354978.230:413): prog-id=79 op=UNLOAD Jan 14 01:42:58.230000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:42:58.247015 kernel: audit: type=1334 audit(1768354978.230:414): prog-id=124 op=LOAD Jan 14 01:42:58.230000 audit: BPF prog-id=124 op=LOAD Jan 14 01:42:58.249164 kernel: audit: type=1334 audit(1768354978.230:415): prog-id=125 op=LOAD Jan 14 01:42:58.230000 audit: BPF prog-id=125 op=LOAD Jan 14 01:42:58.251203 kernel: audit: type=1334 audit(1768354978.230:416): prog-id=80 op=UNLOAD Jan 14 01:42:58.230000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:42:58.230000 audit: BPF prog-id=81 op=UNLOAD Jan 14 01:42:58.233000 audit: BPF prog-id=126 op=LOAD Jan 14 01:42:58.233000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:42:58.233000 audit: BPF prog-id=127 op=LOAD Jan 14 01:42:58.233000 audit: BPF prog-id=128 op=LOAD Jan 14 01:42:58.233000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:42:58.233000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:42:58.234000 audit: BPF prog-id=129 op=LOAD Jan 14 01:42:58.234000 audit: BPF prog-id=83 op=UNLOAD Jan 14 01:42:58.234000 audit: BPF prog-id=130 op=LOAD Jan 14 01:42:58.234000 audit: BPF prog-id=131 op=LOAD Jan 14 01:42:58.234000 audit: BPF prog-id=84 
op=UNLOAD Jan 14 01:42:58.234000 audit: BPF prog-id=85 op=UNLOAD Jan 14 01:42:58.236000 audit: BPF prog-id=132 op=LOAD Jan 14 01:42:58.236000 audit: BPF prog-id=67 op=UNLOAD Jan 14 01:42:58.236000 audit: BPF prog-id=133 op=LOAD Jan 14 01:42:58.236000 audit: BPF prog-id=134 op=LOAD Jan 14 01:42:58.236000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:42:58.236000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:42:58.237000 audit: BPF prog-id=135 op=LOAD Jan 14 01:42:58.237000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:42:58.242000 audit: BPF prog-id=136 op=LOAD Jan 14 01:42:58.247000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:42:58.247000 audit: BPF prog-id=137 op=LOAD Jan 14 01:42:58.247000 audit: BPF prog-id=138 op=LOAD Jan 14 01:42:58.247000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:42:58.247000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:42:58.247000 audit: BPF prog-id=139 op=LOAD Jan 14 01:42:58.247000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:42:58.247000 audit: BPF prog-id=140 op=LOAD Jan 14 01:42:58.247000 audit: BPF prog-id=88 op=UNLOAD Jan 14 01:42:58.249000 audit: BPF prog-id=141 op=LOAD Jan 14 01:42:58.249000 audit: BPF prog-id=142 op=LOAD Jan 14 01:42:58.249000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:42:58.249000 audit: BPF prog-id=90 op=UNLOAD Jan 14 01:42:58.249000 audit: BPF prog-id=143 op=LOAD Jan 14 01:42:58.249000 audit: BPF prog-id=144 op=LOAD Jan 14 01:42:58.249000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:42:58.249000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:42:58.413528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:42:58.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:42:58.424589 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:42:58.461285 kubelet[2803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:42:58.461285 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:42:58.461285 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:42:58.461873 kubelet[2803]: I0114 01:42:58.461342 2803 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:42:58.472237 kubelet[2803]: I0114 01:42:58.470614 2803 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:42:58.472425 kubelet[2803]: I0114 01:42:58.472390 2803 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:42:58.472748 kubelet[2803]: I0114 01:42:58.472717 2803 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:42:58.474114 kubelet[2803]: I0114 01:42:58.474080 2803 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 01:42:58.476778 kubelet[2803]: I0114 01:42:58.476740 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:42:58.482504 kubelet[2803]: I0114 01:42:58.482465 2803 server.go:1446] "Using cgroup driver setting received from the CRI 
runtime" cgroupDriver="systemd" Jan 14 01:42:58.486622 kubelet[2803]: I0114 01:42:58.486586 2803 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 01:42:58.486905 kubelet[2803]: I0114 01:42:58.486861 2803 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:42:58.487061 kubelet[2803]: I0114 01:42:58.486890 2803 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-193-229","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":n
ull,"CgroupVersion":2} Jan 14 01:42:58.487061 kubelet[2803]: I0114 01:42:58.487028 2803 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:42:58.487061 kubelet[2803]: I0114 01:42:58.487036 2803 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:42:58.487214 kubelet[2803]: I0114 01:42:58.487079 2803 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:42:58.487368 kubelet[2803]: I0114 01:42:58.487345 2803 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:42:58.487780 kubelet[2803]: I0114 01:42:58.487695 2803 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:42:58.487780 kubelet[2803]: I0114 01:42:58.487730 2803 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:42:58.487780 kubelet[2803]: I0114 01:42:58.487744 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:42:58.490656 kubelet[2803]: I0114 01:42:58.490478 2803 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:42:58.490927 kubelet[2803]: I0114 01:42:58.490892 2803 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:42:58.493788 kubelet[2803]: I0114 01:42:58.493467 2803 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:42:58.493788 kubelet[2803]: I0114 01:42:58.493648 2803 server.go:1289] "Started kubelet" Jan 14 01:42:58.496280 kubelet[2803]: I0114 01:42:58.496227 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:42:58.504270 kubelet[2803]: I0114 01:42:58.502438 2803 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:42:58.506614 kubelet[2803]: I0114 01:42:58.505340 2803 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:42:58.513461 kubelet[2803]: I0114 01:42:58.513410 2803 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:42:58.514456 kubelet[2803]: I0114 01:42:58.513650 2803 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:42:58.514456 kubelet[2803]: I0114 01:42:58.514186 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:42:58.518269 kubelet[2803]: I0114 01:42:58.517508 2803 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:42:58.518269 kubelet[2803]: E0114 01:42:58.517893 2803 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-193-229\" not found" Jan 14 01:42:58.519767 kubelet[2803]: I0114 01:42:58.519052 2803 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:42:58.519767 kubelet[2803]: I0114 01:42:58.519231 2803 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:42:58.522737 kubelet[2803]: I0114 01:42:58.522707 2803 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:42:58.523041 kubelet[2803]: I0114 01:42:58.522899 2803 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:42:58.532404 kubelet[2803]: I0114 01:42:58.531507 2803 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:42:58.540646 kubelet[2803]: I0114 01:42:58.540594 2803 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:42:58.542150 kubelet[2803]: I0114 01:42:58.542114 2803 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:42:58.542150 kubelet[2803]: I0114 01:42:58.542137 2803 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:42:58.542150 kubelet[2803]: I0114 01:42:58.542154 2803 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:42:58.542357 kubelet[2803]: I0114 01:42:58.542162 2803 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:42:58.542357 kubelet[2803]: E0114 01:42:58.542204 2803 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:42:58.596832 kubelet[2803]: I0114 01:42:58.596787 2803 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:42:58.596832 kubelet[2803]: I0114 01:42:58.596803 2803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:42:58.596832 kubelet[2803]: I0114 01:42:58.596822 2803 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:42:58.597130 kubelet[2803]: I0114 01:42:58.596946 2803 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:42:58.597130 kubelet[2803]: I0114 01:42:58.596956 2803 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:42:58.597130 kubelet[2803]: I0114 01:42:58.596972 2803 policy_none.go:49] "None policy: Start" Jan 14 01:42:58.597130 kubelet[2803]: I0114 01:42:58.596981 2803 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:42:58.597130 kubelet[2803]: I0114 01:42:58.596991 2803 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:42:58.597130 kubelet[2803]: I0114 01:42:58.597074 2803 state_mem.go:75] "Updated machine memory state" Jan 14 01:42:58.606207 kubelet[2803]: E0114 01:42:58.605622 2803 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:42:58.606207 kubelet[2803]: I0114 
01:42:58.605824 2803 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:42:58.606207 kubelet[2803]: I0114 01:42:58.605836 2803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:42:58.606207 kubelet[2803]: I0114 01:42:58.606091 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:42:58.607111 kubelet[2803]: E0114 01:42:58.607081 2803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:42:58.643647 kubelet[2803]: I0114 01:42:58.643615 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:58.643936 kubelet[2803]: I0114 01:42:58.643648 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-229" Jan 14 01:42:58.644095 kubelet[2803]: I0114 01:42:58.643777 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:58.709940 kubelet[2803]: I0114 01:42:58.709040 2803 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-229" Jan 14 01:42:58.717933 kubelet[2803]: I0114 01:42:58.717807 2803 kubelet_node_status.go:124] "Node was previously registered" node="172-239-193-229" Jan 14 01:42:58.718429 kubelet[2803]: I0114 01:42:58.718240 2803 kubelet_node_status.go:78] "Successfully registered node" node="172-239-193-229" Jan 14 01:42:58.720272 kubelet[2803]: I0114 01:42:58.720057 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b006bffc2533fd8e9e97a0b16e60f945-k8s-certs\") pod \"kube-apiserver-172-239-193-229\" (UID: \"b006bffc2533fd8e9e97a0b16e60f945\") " pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:58.720272 kubelet[2803]: I0114 
01:42:58.720101 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b006bffc2533fd8e9e97a0b16e60f945-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-193-229\" (UID: \"b006bffc2533fd8e9e97a0b16e60f945\") " pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:58.720272 kubelet[2803]: I0114 01:42:58.720124 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73def85ddab0dff5a16485cb358811f0-flexvolume-dir\") pod \"kube-controller-manager-172-239-193-229\" (UID: \"73def85ddab0dff5a16485cb358811f0\") " pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:58.720272 kubelet[2803]: I0114 01:42:58.720144 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73def85ddab0dff5a16485cb358811f0-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-193-229\" (UID: \"73def85ddab0dff5a16485cb358811f0\") " pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:58.720272 kubelet[2803]: I0114 01:42:58.720165 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b006bffc2533fd8e9e97a0b16e60f945-ca-certs\") pod \"kube-apiserver-172-239-193-229\" (UID: \"b006bffc2533fd8e9e97a0b16e60f945\") " pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:58.720431 kubelet[2803]: I0114 01:42:58.720182 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73def85ddab0dff5a16485cb358811f0-ca-certs\") pod \"kube-controller-manager-172-239-193-229\" (UID: \"73def85ddab0dff5a16485cb358811f0\") " 
pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:58.720431 kubelet[2803]: I0114 01:42:58.720198 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73def85ddab0dff5a16485cb358811f0-k8s-certs\") pod \"kube-controller-manager-172-239-193-229\" (UID: \"73def85ddab0dff5a16485cb358811f0\") " pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:58.720431 kubelet[2803]: I0114 01:42:58.720216 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73def85ddab0dff5a16485cb358811f0-kubeconfig\") pod \"kube-controller-manager-172-239-193-229\" (UID: \"73def85ddab0dff5a16485cb358811f0\") " pod="kube-system/kube-controller-manager-172-239-193-229" Jan 14 01:42:58.720431 kubelet[2803]: I0114 01:42:58.720235 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/513df8d42a4a98d528c60075e2a30332-kubeconfig\") pod \"kube-scheduler-172-239-193-229\" (UID: \"513df8d42a4a98d528c60075e2a30332\") " pod="kube-system/kube-scheduler-172-239-193-229" Jan 14 01:42:58.948702 kubelet[2803]: E0114 01:42:58.948647 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:58.951099 kubelet[2803]: E0114 01:42:58.950671 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:58.951099 kubelet[2803]: E0114 01:42:58.950858 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:59.496678 kubelet[2803]: I0114 01:42:59.496632 2803 apiserver.go:52] "Watching apiserver" Jan 14 01:42:59.520178 kubelet[2803]: I0114 01:42:59.520139 2803 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:42:59.576177 kubelet[2803]: E0114 01:42:59.575927 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:59.577004 kubelet[2803]: I0114 01:42:59.576947 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-229" Jan 14 01:42:59.577213 kubelet[2803]: I0114 01:42:59.577188 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:59.583537 kubelet[2803]: E0114 01:42:59.583480 2803 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-193-229\" already exists" pod="kube-system/kube-scheduler-172-239-193-229" Jan 14 01:42:59.583963 kubelet[2803]: E0114 01:42:59.583947 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:59.584684 kubelet[2803]: E0114 01:42:59.584647 2803 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-193-229\" already exists" pod="kube-system/kube-apiserver-172-239-193-229" Jan 14 01:42:59.584997 kubelet[2803]: E0114 01:42:59.584965 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:42:59.602038 kubelet[2803]: I0114 01:42:59.601910 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-172-239-193-229" podStartSLOduration=1.601896024 podStartE2EDuration="1.601896024s" podCreationTimestamp="2026-01-14 01:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:42:59.595647167 +0000 UTC m=+1.165691188" watchObservedRunningTime="2026-01-14 01:42:59.601896024 +0000 UTC m=+1.171940035" Jan 14 01:42:59.609170 kubelet[2803]: I0114 01:42:59.609082 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-193-229" podStartSLOduration=1.609067901 podStartE2EDuration="1.609067901s" podCreationTimestamp="2026-01-14 01:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:42:59.602781844 +0000 UTC m=+1.172825855" watchObservedRunningTime="2026-01-14 01:42:59.609067901 +0000 UTC m=+1.179111912" Jan 14 01:42:59.614813 kubelet[2803]: I0114 01:42:59.614689 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-193-229" podStartSLOduration=1.614672028 podStartE2EDuration="1.614672028s" podCreationTimestamp="2026-01-14 01:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:42:59.609380971 +0000 UTC m=+1.179424982" watchObservedRunningTime="2026-01-14 01:42:59.614672028 +0000 UTC m=+1.184716039" Jan 14 01:43:00.577193 kubelet[2803]: E0114 01:43:00.577034 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:00.577193 kubelet[2803]: E0114 01:43:00.577050 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:01.534592 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 14 01:43:01.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:43:01.550000 audit: BPF prog-id=129 op=UNLOAD Jan 14 01:43:01.578371 kubelet[2803]: E0114 01:43:01.578322 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:02.876525 kubelet[2803]: I0114 01:43:02.876239 2803 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:43:02.878632 kubelet[2803]: I0114 01:43:02.878310 2803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:43:02.878979 containerd[1600]: time="2026-01-14T01:43:02.877812336Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 01:43:03.595884 kubelet[2803]: E0114 01:43:03.595781 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:03.647194 systemd[1]: Created slice kubepods-besteffort-podf4fe304f_a149_45b1_b092_58cc2a227746.slice - libcontainer container kubepods-besteffort-podf4fe304f_a149_45b1_b092_58cc2a227746.slice. 
Jan 14 01:43:03.653121 kubelet[2803]: I0114 01:43:03.653018 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4fe304f-a149-45b1-b092-58cc2a227746-kube-proxy\") pod \"kube-proxy-csqr5\" (UID: \"f4fe304f-a149-45b1-b092-58cc2a227746\") " pod="kube-system/kube-proxy-csqr5" Jan 14 01:43:03.653121 kubelet[2803]: I0114 01:43:03.653049 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shn5w\" (UniqueName: \"kubernetes.io/projected/f4fe304f-a149-45b1-b092-58cc2a227746-kube-api-access-shn5w\") pod \"kube-proxy-csqr5\" (UID: \"f4fe304f-a149-45b1-b092-58cc2a227746\") " pod="kube-system/kube-proxy-csqr5" Jan 14 01:43:03.653121 kubelet[2803]: I0114 01:43:03.653068 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4fe304f-a149-45b1-b092-58cc2a227746-xtables-lock\") pod \"kube-proxy-csqr5\" (UID: \"f4fe304f-a149-45b1-b092-58cc2a227746\") " pod="kube-system/kube-proxy-csqr5" Jan 14 01:43:03.653121 kubelet[2803]: I0114 01:43:03.653082 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4fe304f-a149-45b1-b092-58cc2a227746-lib-modules\") pod \"kube-proxy-csqr5\" (UID: \"f4fe304f-a149-45b1-b092-58cc2a227746\") " pod="kube-system/kube-proxy-csqr5" Jan 14 01:43:03.867038 systemd[1]: Created slice kubepods-besteffort-pod6e2c5398_ea98_40ff_bc5f_e147f52551fe.slice - libcontainer container kubepods-besteffort-pod6e2c5398_ea98_40ff_bc5f_e147f52551fe.slice. 
Jan 14 01:43:03.955575 kubelet[2803]: I0114 01:43:03.955520 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e2c5398-ea98-40ff-bc5f-e147f52551fe-var-lib-calico\") pod \"tigera-operator-7dcd859c48-q52k7\" (UID: \"6e2c5398-ea98-40ff-bc5f-e147f52551fe\") " pod="tigera-operator/tigera-operator-7dcd859c48-q52k7" Jan 14 01:43:03.955965 kubelet[2803]: I0114 01:43:03.955585 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsqmj\" (UniqueName: \"kubernetes.io/projected/6e2c5398-ea98-40ff-bc5f-e147f52551fe-kube-api-access-tsqmj\") pod \"tigera-operator-7dcd859c48-q52k7\" (UID: \"6e2c5398-ea98-40ff-bc5f-e147f52551fe\") " pod="tigera-operator/tigera-operator-7dcd859c48-q52k7" Jan 14 01:43:03.957954 kubelet[2803]: E0114 01:43:03.957916 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:03.962816 containerd[1600]: time="2026-01-14T01:43:03.959760385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-csqr5,Uid:f4fe304f-a149-45b1-b092-58cc2a227746,Namespace:kube-system,Attempt:0,}" Jan 14 01:43:03.981664 containerd[1600]: time="2026-01-14T01:43:03.980755424Z" level=info msg="connecting to shim 1a557b1278461736b346882dca0e32f63a46f101dfe6ba66102d76e3edfe639e" address="unix:///run/containerd/s/ad642a0824b1f37588a13c63002b2c516ac34e1f6edd3afedf4a8d7ba16bb27c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:04.007411 systemd[1]: Started cri-containerd-1a557b1278461736b346882dca0e32f63a46f101dfe6ba66102d76e3edfe639e.scope - libcontainer container 1a557b1278461736b346882dca0e32f63a46f101dfe6ba66102d76e3edfe639e. 
Jan 14 01:43:04.021281 kernel: kauditd_printk_skb: 42 callbacks suppressed
Jan 14 01:43:04.021382 kernel: audit: type=1334 audit(1768354984.018:459): prog-id=145 op=LOAD
Jan 14 01:43:04.018000 audit: BPF prog-id=145 op=LOAD
Jan 14 01:43:04.023000 audit: BPF prog-id=146 op=LOAD
Jan 14 01:43:04.023000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.028429 kernel: audit: type=1334 audit(1768354984.023:460): prog-id=146 op=LOAD
Jan 14 01:43:04.028502 kernel: audit: type=1300 audit(1768354984.023:460): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.035878 kernel: audit: type=1327 audit(1768354984.023:460): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161353537623132373834363137333662333436383832646361306533
Jan 14 01:43:04.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161353537623132373834363137333662333436383832646361306533
Jan 14 01:43:04.023000 audit: BPF prog-id=146 op=UNLOAD
Jan 14 01:43:04.023000 audit[2875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.045971 kernel: audit: type=1334 audit(1768354984.023:461): prog-id=146 op=UNLOAD
Jan 14 01:43:04.046014 kernel: audit: type=1300 audit(1768354984.023:461): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.023000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161353537623132373834363137333662333436383832646361306533
Jan 14 01:43:04.054229 kernel: audit: type=1327 audit(1768354984.023:461): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161353537623132373834363137333662333436383832646361306533
Jan 14 01:43:04.060353 kernel: audit: type=1334 audit(1768354984.024:462): prog-id=147 op=LOAD
Jan 14 01:43:04.024000 audit: BPF prog-id=147 op=LOAD
Jan 14 01:43:04.024000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.063576 kernel: audit: type=1300 audit(1768354984.024:462): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.069063 containerd[1600]: time="2026-01-14T01:43:04.068961110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-csqr5,Uid:f4fe304f-a149-45b1-b092-58cc2a227746,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a557b1278461736b346882dca0e32f63a46f101dfe6ba66102d76e3edfe639e\""
Jan 14 01:43:04.070300 kubelet[2803]: E0114 01:43:04.069968 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jan 14 01:43:04.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161353537623132373834363137333662333436383832646361306533
Jan 14 01:43:04.083288 kernel: audit: type=1327 audit(1768354984.024:462): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161353537623132373834363137333662333436383832646361306533
Jan 14 01:43:04.024000 audit: BPF prog-id=148 op=LOAD
Jan 14 01:43:04.024000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161353537623132373834363137333662333436383832646361306533
Jan 14 01:43:04.024000 audit: BPF prog-id=148 op=UNLOAD
Jan 14 01:43:04.024000 audit[2875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161353537623132373834363137333662333436383832646361306533
Jan 14 01:43:04.024000 audit: BPF prog-id=147 op=UNLOAD
Jan 14 01:43:04.024000 audit[2875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161353537623132373834363137333662333436383832646361306533
Jan 14 01:43:04.024000 audit: BPF prog-id=149 op=LOAD
Jan 14 01:43:04.024000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2864 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161353537623132373834363137333662333436383832646361306533
Jan 14 01:43:04.084175 containerd[1600]: time="2026-01-14T01:43:04.084144773Z" level=info msg="CreateContainer within sandbox \"1a557b1278461736b346882dca0e32f63a46f101dfe6ba66102d76e3edfe639e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 14 01:43:04.092547 containerd[1600]: time="2026-01-14T01:43:04.092526548Z" level=info msg="Container 2fcdbf90fedc7decdbb0469292195a3aab7a377ae5d0ff7a964fc2f6db3f7a50: CDI devices from CRI Config.CDIDevices: []"
Jan 14 01:43:04.097875 containerd[1600]: time="2026-01-14T01:43:04.097844396Z" level=info msg="CreateContainer within sandbox \"1a557b1278461736b346882dca0e32f63a46f101dfe6ba66102d76e3edfe639e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2fcdbf90fedc7decdbb0469292195a3aab7a377ae5d0ff7a964fc2f6db3f7a50\""
Jan 14 01:43:04.099779 containerd[1600]: time="2026-01-14T01:43:04.098556825Z" level=info msg="StartContainer for \"2fcdbf90fedc7decdbb0469292195a3aab7a377ae5d0ff7a964fc2f6db3f7a50\""
Jan 14 01:43:04.099910 containerd[1600]: time="2026-01-14T01:43:04.099891935Z" level=info msg="connecting to shim 2fcdbf90fedc7decdbb0469292195a3aab7a377ae5d0ff7a964fc2f6db3f7a50" address="unix:///run/containerd/s/ad642a0824b1f37588a13c63002b2c516ac34e1f6edd3afedf4a8d7ba16bb27c" protocol=ttrpc version=3
Jan 14 01:43:04.119410 systemd[1]: Started cri-containerd-2fcdbf90fedc7decdbb0469292195a3aab7a377ae5d0ff7a964fc2f6db3f7a50.scope - libcontainer container 2fcdbf90fedc7decdbb0469292195a3aab7a377ae5d0ff7a964fc2f6db3f7a50.
Jan 14 01:43:04.173192 containerd[1600]: time="2026-01-14T01:43:04.173001498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-q52k7,Uid:6e2c5398-ea98-40ff-bc5f-e147f52551fe,Namespace:tigera-operator,Attempt:0,}"
Jan 14 01:43:04.173000 audit: BPF prog-id=150 op=LOAD
Jan 14 01:43:04.173000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2864 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266636462663930666564633764656364626230343639323932313935
Jan 14 01:43:04.173000 audit: BPF prog-id=151 op=LOAD
Jan 14 01:43:04.173000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2864 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266636462663930666564633764656364626230343639323932313935
Jan 14 01:43:04.173000 audit: BPF prog-id=151 op=UNLOAD
Jan 14 01:43:04.173000 audit[2902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266636462663930666564633764656364626230343639323932313935
Jan 14 01:43:04.173000 audit: BPF prog-id=150 op=UNLOAD
Jan 14 01:43:04.173000 audit[2902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2864 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266636462663930666564633764656364626230343639323932313935
Jan 14 01:43:04.174000 audit: BPF prog-id=152 op=LOAD
Jan 14 01:43:04.174000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2864 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266636462663930666564633764656364626230343639323932313935
Jan 14 01:43:04.206873 containerd[1600]: time="2026-01-14T01:43:04.206832561Z" level=info msg="connecting to shim b2c47ddf30b3ae4f789f6adb88da58a281b3d9dc0bdb101ea921222f57b959e4" address="unix:///run/containerd/s/5f98226891e234b331f1823e32117175ddb79dc57ccea9ed5186aabf6eefe67c" namespace=k8s.io protocol=ttrpc version=3
Jan 14 01:43:04.208410 containerd[1600]: time="2026-01-14T01:43:04.208386320Z" level=info msg="StartContainer for \"2fcdbf90fedc7decdbb0469292195a3aab7a377ae5d0ff7a964fc2f6db3f7a50\" returns successfully"
Jan 14 01:43:04.243595 systemd[1]: Started cri-containerd-b2c47ddf30b3ae4f789f6adb88da58a281b3d9dc0bdb101ea921222f57b959e4.scope - libcontainer container b2c47ddf30b3ae4f789f6adb88da58a281b3d9dc0bdb101ea921222f57b959e4.
Jan 14 01:43:04.258000 audit: BPF prog-id=153 op=LOAD
Jan 14 01:43:04.258000 audit: BPF prog-id=154 op=LOAD
Jan 14 01:43:04.258000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2937 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633437646466333062336165346637383966366164623838646135
Jan 14 01:43:04.258000 audit: BPF prog-id=154 op=UNLOAD
Jan 14 01:43:04.258000 audit[2951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633437646466333062336165346637383966366164623838646135
Jan 14 01:43:04.258000 audit: BPF prog-id=155 op=LOAD
Jan 14 01:43:04.258000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2937 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633437646466333062336165346637383966366164623838646135
Jan 14 01:43:04.258000 audit: BPF prog-id=156 op=LOAD
Jan 14 01:43:04.258000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2937 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633437646466333062336165346637383966366164623838646135
Jan 14 01:43:04.258000 audit: BPF prog-id=156 op=UNLOAD
Jan 14 01:43:04.258000 audit[2951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633437646466333062336165346637383966366164623838646135
Jan 14 01:43:04.258000 audit: BPF prog-id=155 op=UNLOAD
Jan 14 01:43:04.258000 audit[2951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633437646466333062336165346637383966366164623838646135
Jan 14 01:43:04.258000 audit: BPF prog-id=157 op=LOAD
Jan 14 01:43:04.258000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2937 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232633437646466333062336165346637383966366164623838646135
Jan 14 01:43:04.295529 containerd[1600]: time="2026-01-14T01:43:04.295452097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-q52k7,Uid:6e2c5398-ea98-40ff-bc5f-e147f52551fe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b2c47ddf30b3ae4f789f6adb88da58a281b3d9dc0bdb101ea921222f57b959e4\""
Jan 14 01:43:04.297515 containerd[1600]: time="2026-01-14T01:43:04.297348336Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 14 01:43:04.355000 audit[3016]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:43:04.355000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6161f400 a2=0 a3=7fff6161f3ec items=0 ppid=2916 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jan 14 01:43:04.356000 audit[3015]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.356000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff724801b0 a2=0 a3=7fff7248019c items=0 ppid=2916 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jan 14 01:43:04.358000 audit[3017]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:43:04.358000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe948cb870 a2=0 a3=7ffe948cb85c items=0 ppid=2916 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.358000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jan 14 01:43:04.359000 audit[3018]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.359000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4e32a670 a2=0 a3=7fff4e32a65c items=0 ppid=2916 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.359000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jan 14 01:43:04.360000 audit[3019]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:43:04.360000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff291c00e0 a2=0 a3=7fff291c00cc items=0 ppid=2916 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Jan 14 01:43:04.361000 audit[3020]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.361000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8ace4cb0 a2=0 a3=7fff8ace4c9c items=0 ppid=2916 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Jan 14 01:43:04.465000 audit[3024]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.465000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd9f9daec0 a2=0 a3=7ffd9f9daeac items=0 ppid=2916 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.465000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Jan 14 01:43:04.470000 audit[3026]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.470000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff1796d510 a2=0 a3=7fff1796d4fc items=0 ppid=2916 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365
Jan 14 01:43:04.474000 audit[3029]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.474000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdb8c9b520 a2=0 a3=7ffdb8c9b50c items=0 ppid=2916 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669
Jan 14 01:43:04.476000 audit[3030]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.476000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf072ccf0 a2=0 a3=7ffdf072ccdc items=0 ppid=2916 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.476000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Jan 14 01:43:04.479000 audit[3032]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.479000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc15df4510 a2=0 a3=7ffc15df44fc items=0 ppid=2916 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.479000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Jan 14 01:43:04.481000 audit[3033]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.481000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd794eabf0 a2=0 a3=7ffd794eabdc items=0 ppid=2916 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Jan 14 01:43:04.484000 audit[3035]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.484000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffc8a52e80 a2=0 a3=7fffc8a52e6c items=0 ppid=2916 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Jan 14 01:43:04.489000 audit[3038]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.489000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcbf3b2f10 a2=0 a3=7ffcbf3b2efc items=0 ppid=2916 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53
Jan 14 01:43:04.490000 audit[3039]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.490000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccb808a50 a2=0 a3=7ffccb808a3c items=0 ppid=2916 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Jan 14 01:43:04.493000 audit[3041]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.493000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe5cd32d40 a2=0 a3=7ffe5cd32d2c items=0 ppid=2916 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.493000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Jan 14 01:43:04.495000 audit[3042]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.495000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0cbd0820 a2=0 a3=7fff0cbd080c items=0 ppid=2916 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Jan 14 01:43:04.498000 audit[3044]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.498000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4918e850 a2=0 a3=7ffe4918e83c items=0 ppid=2916 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Jan 14 01:43:04.502000 audit[3047]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.502000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde97e5a30 a2=0 a3=7ffde97e5a1c items=0 ppid=2916 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.502000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Jan 14 01:43:04.507000 audit[3050]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.507000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdc896ac0 a2=0 a3=7ffcdc896aac items=0 ppid=2916 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Jan 14 01:43:04.508000 audit[3051]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.508000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffedf4fdf80 a2=0 a3=7ffedf4fdf6c items=0 ppid=2916 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Jan 14 01:43:04.511000 audit[3053]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.511000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc7e7a9e50 a2=0 a3=7ffc7e7a9e3c items=0 ppid=2916 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.511000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Jan 14 01:43:04.516000 audit[3056]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.516000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea436be60 a2=0 a3=7ffea436be4c items=0 ppid=2916 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Jan 14 01:43:04.517000 audit[3057]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:43:04.517000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1fe50670 a2=0 a3=7ffd1fe5065c items=0 ppid=2916 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:43:04.517000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Jan 14
01:43:04.520000 audit[3059]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:43:04.520000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc4e6d6180 a2=0 a3=7ffc4e6d616c items=0 ppid=2916 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.520000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:43:04.543000 audit[3065]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:04.543000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3e6e79e0 a2=0 a3=7ffc3e6e79cc items=0 ppid=2916 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.543000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:04.556000 audit[3065]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:04.556000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc3e6e79e0 a2=0 a3=7ffc3e6e79cc items=0 ppid=2916 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:04.557000 audit[3070]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.557000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc92d40440 a2=0 a3=7ffc92d4042c items=0 ppid=2916 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.557000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:43:04.564000 audit[3072]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.564000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe124c2e70 a2=0 a3=7ffe124c2e5c items=0 ppid=2916 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:43:04.569000 audit[3075]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.569000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7ffd3c7e54c0 a2=0 a3=7ffd3c7e54ac items=0 ppid=2916 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:43:04.570000 audit[3076]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.570000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff529c7610 a2=0 a3=7fff529c75fc items=0 ppid=2916 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:43:04.573000 audit[3078]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.573000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc4e4b44d0 a2=0 a3=7ffc4e4b44bc items=0 ppid=2916 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.573000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:43:04.575000 audit[3079]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.575000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc87f45dd0 a2=0 a3=7ffc87f45dbc items=0 ppid=2916 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.575000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:43:04.578000 audit[3081]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.578000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc5ab5b7d0 a2=0 a3=7ffc5ab5b7bc items=0 ppid=2916 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:43:04.583000 audit[3084]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.583000 audit[3084]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7fff9790d5b0 a2=0 a3=7fff9790d59c items=0 ppid=2916 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:43:04.585000 audit[3085]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.585000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc13d8d7e0 a2=0 a3=7ffc13d8d7cc items=0 ppid=2916 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:43:04.586856 kubelet[2803]: E0114 01:43:04.586828 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:04.590000 audit[3087]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.590000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe71fba2a0 a2=0 a3=7ffe71fba28c items=0 ppid=2916 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:43:04.593000 audit[3088]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.593000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc63c15960 a2=0 a3=7ffc63c1594c items=0 ppid=2916 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.593000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:43:04.599000 audit[3090]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.599000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4b8dd790 a2=0 a3=7ffe4b8dd77c items=0 ppid=2916 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.599000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:43:04.605000 audit[3093]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3093 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 14 01:43:04.605000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc54522340 a2=0 a3=7ffc5452232c items=0 ppid=2916 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.605000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:43:04.611000 audit[3096]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.611000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe62023700 a2=0 a3=7ffe620236ec items=0 ppid=2916 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.611000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:43:04.612000 audit[3097]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.612000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff80694330 a2=0 a3=7fff8069431c items=0 ppid=2916 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.612000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:43:04.616000 audit[3099]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.616000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd8b593a80 a2=0 a3=7ffd8b593a6c items=0 ppid=2916 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:43:04.621000 audit[3102]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.621000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc09451040 a2=0 a3=7ffc0945102c items=0 ppid=2916 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:43:04.622000 audit[3103]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.622000 audit[3103]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7fff9dce86b0 a2=0 a3=7fff9dce869c items=0 ppid=2916 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.622000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:43:04.625000 audit[3105]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.625000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc2b15a800 a2=0 a3=7ffc2b15a7ec items=0 ppid=2916 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.625000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:43:04.627000 audit[3106]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.627000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff03b24a80 a2=0 a3=7fff03b24a6c items=0 ppid=2916 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:43:04.630000 audit[3108]: NETFILTER_CFG table=filter:101 family=10 
entries=1 op=nft_register_rule pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.630000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdcf181040 a2=0 a3=7ffdcf18102c items=0 ppid=2916 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.630000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:43:04.634000 audit[3111]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:43:04.634000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff8348cab0 a2=0 a3=7fff8348ca9c items=0 ppid=2916 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.634000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:43:04.639000 audit[3113]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:43:04.639000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffea4a5c9e0 a2=0 a3=7ffea4a5c9cc items=0 ppid=2916 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.639000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:04.639000 audit[3113]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:43:04.639000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffea4a5c9e0 a2=0 a3=7ffea4a5c9cc items=0 ppid=2916 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:04.639000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:05.430224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113628113.mount: Deactivated successfully. Jan 14 01:43:06.063272 kubelet[2803]: E0114 01:43:06.062479 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:06.080587 kubelet[2803]: I0114 01:43:06.080512 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-csqr5" podStartSLOduration=3.080472994 podStartE2EDuration="3.080472994s" podCreationTimestamp="2026-01-14 01:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:43:04.599502545 +0000 UTC m=+6.169546556" watchObservedRunningTime="2026-01-14 01:43:06.080472994 +0000 UTC m=+7.650517005" Jan 14 01:43:06.232312 kubelet[2803]: E0114 01:43:06.232221 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 
01:43:06.591148 kubelet[2803]: E0114 01:43:06.590556 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:06.591427 kubelet[2803]: E0114 01:43:06.591390 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:07.383503 containerd[1600]: time="2026-01-14T01:43:07.383190423Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:07.384809 containerd[1600]: time="2026-01-14T01:43:07.384630742Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 01:43:07.385513 containerd[1600]: time="2026-01-14T01:43:07.385468432Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:07.387552 containerd[1600]: time="2026-01-14T01:43:07.387506801Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:07.388375 containerd[1600]: time="2026-01-14T01:43:07.388335550Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.090961474s" Jan 14 01:43:07.388460 containerd[1600]: time="2026-01-14T01:43:07.388445200Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" 
returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:43:07.393873 containerd[1600]: time="2026-01-14T01:43:07.393766897Z" level=info msg="CreateContainer within sandbox \"b2c47ddf30b3ae4f789f6adb88da58a281b3d9dc0bdb101ea921222f57b959e4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:43:07.416287 containerd[1600]: time="2026-01-14T01:43:07.413442088Z" level=info msg="Container 399843dcd98278cfbe90a12c638ec3ddc5791977649f43660ace324f313df65a: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:43:07.416565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423761647.mount: Deactivated successfully. Jan 14 01:43:07.423177 containerd[1600]: time="2026-01-14T01:43:07.423086283Z" level=info msg="CreateContainer within sandbox \"b2c47ddf30b3ae4f789f6adb88da58a281b3d9dc0bdb101ea921222f57b959e4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"399843dcd98278cfbe90a12c638ec3ddc5791977649f43660ace324f313df65a\"" Jan 14 01:43:07.425392 containerd[1600]: time="2026-01-14T01:43:07.424135392Z" level=info msg="StartContainer for \"399843dcd98278cfbe90a12c638ec3ddc5791977649f43660ace324f313df65a\"" Jan 14 01:43:07.426096 containerd[1600]: time="2026-01-14T01:43:07.426064461Z" level=info msg="connecting to shim 399843dcd98278cfbe90a12c638ec3ddc5791977649f43660ace324f313df65a" address="unix:///run/containerd/s/5f98226891e234b331f1823e32117175ddb79dc57ccea9ed5186aabf6eefe67c" protocol=ttrpc version=3 Jan 14 01:43:07.463513 systemd[1]: Started cri-containerd-399843dcd98278cfbe90a12c638ec3ddc5791977649f43660ace324f313df65a.scope - libcontainer container 399843dcd98278cfbe90a12c638ec3ddc5791977649f43660ace324f313df65a. 
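The `audit: PROCTITLE` records above and below carry the invoked command line as a hex dump in which argv elements are separated by NUL bytes. A minimal sketch of decoding one (the helper name `decode_proctitle` is illustrative, not part of any tool in this log; the sample hex is the `iptables-restore` PROCTITLE record logged earlier in this section):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex dump into a readable command line.

    In the raw record, argv elements are separated by NUL (0x00) bytes,
    so splitting on NUL and joining with spaces recovers the command.
    """
    raw = bytes.fromhex(hex_str)
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00"))


# Sample: the iptables-restore PROCTITLE value from this log.
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```

Decoding the other PROCTITLE values the same way shows kube-proxy driving `iptables`/`ip6tables` (via `xtables-nft-multi`) to register the KUBE-SERVICES, KUBE-FORWARD, KUBE-NODEPORTS, and KUBE-POSTROUTING chains seen in the NETFILTER_CFG records.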
Jan 14 01:43:07.484000 audit: BPF prog-id=158 op=LOAD Jan 14 01:43:07.485000 audit: BPF prog-id=159 op=LOAD Jan 14 01:43:07.485000 audit[3122]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2937 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:07.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339393834336463643938323738636662653930613132633633386563 Jan 14 01:43:07.485000 audit: BPF prog-id=159 op=UNLOAD Jan 14 01:43:07.485000 audit[3122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:07.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339393834336463643938323738636662653930613132633633386563 Jan 14 01:43:07.485000 audit: BPF prog-id=160 op=LOAD Jan 14 01:43:07.485000 audit[3122]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2937 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:07.485000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339393834336463643938323738636662653930613132633633386563 Jan 14 01:43:07.485000 audit: BPF prog-id=161 op=LOAD Jan 14 01:43:07.485000 audit[3122]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2937 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:07.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339393834336463643938323738636662653930613132633633386563 Jan 14 01:43:07.486000 audit: BPF prog-id=161 op=UNLOAD Jan 14 01:43:07.486000 audit[3122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:07.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339393834336463643938323738636662653930613132633633386563 Jan 14 01:43:07.486000 audit: BPF prog-id=160 op=UNLOAD Jan 14 01:43:07.486000 audit[3122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2937 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:43:07.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339393834336463643938323738636662653930613132633633386563 Jan 14 01:43:07.486000 audit: BPF prog-id=162 op=LOAD Jan 14 01:43:07.486000 audit[3122]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2937 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:07.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339393834336463643938323738636662653930613132633633386563 Jan 14 01:43:07.518942 containerd[1600]: time="2026-01-14T01:43:07.518861085Z" level=info msg="StartContainer for \"399843dcd98278cfbe90a12c638ec3ddc5791977649f43660ace324f313df65a\" returns successfully" Jan 14 01:43:07.595456 kubelet[2803]: E0114 01:43:07.595391 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:07.598363 kubelet[2803]: E0114 01:43:07.598307 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:07.616625 kubelet[2803]: I0114 01:43:07.616325 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-q52k7" podStartSLOduration=1.523078123 podStartE2EDuration="4.616290636s" podCreationTimestamp="2026-01-14 01:43:03 +0000 
UTC" firstStartedPulling="2026-01-14 01:43:04.296669276 +0000 UTC m=+5.866713287" lastFinishedPulling="2026-01-14 01:43:07.389881789 +0000 UTC m=+8.959925800" observedRunningTime="2026-01-14 01:43:07.616043316 +0000 UTC m=+9.186087327" watchObservedRunningTime="2026-01-14 01:43:07.616290636 +0000 UTC m=+9.186334657" Jan 14 01:43:12.764099 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 01:43:12.764371 kernel: audit: type=1325 audit(1768354992.753:539): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:12.753000 audit[3184]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:12.753000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc44198530 a2=0 a3=7ffc4419851c items=0 ppid=2916 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:12.775292 kernel: audit: type=1300 audit(1768354992.753:539): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc44198530 a2=0 a3=7ffc4419851c items=0 ppid=2916 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:12.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:12.781271 kernel: audit: type=1327 audit(1768354992.753:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:12.775000 audit[3184]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3184 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:12.775000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc44198530 a2=0 a3=0 items=0 ppid=2916 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:12.792109 kernel: audit: type=1325 audit(1768354992.775:540): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:12.792179 kernel: audit: type=1300 audit(1768354992.775:540): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc44198530 a2=0 a3=0 items=0 ppid=2916 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:12.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:12.806308 kernel: audit: type=1327 audit(1768354992.775:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:12.813000 audit[3186]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:12.820272 kernel: audit: type=1325 audit(1768354992.813:541): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:12.813000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd8a3605a0 a2=0 a3=7ffd8a36058c items=0 ppid=2916 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:12.835261 kernel: audit: type=1300 audit(1768354992.813:541): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd8a3605a0 a2=0 a3=7ffd8a36058c items=0 ppid=2916 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:12.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:12.842264 kernel: audit: type=1327 audit(1768354992.813:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:12.821000 audit[3186]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:12.850300 kernel: audit: type=1325 audit(1768354992.821:542): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:12.821000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8a3605a0 a2=0 a3=0 items=0 ppid=2916 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:12.821000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:13.328320 sudo[1863]: pam_unix(sudo:session): session closed for user root Jan 14 01:43:13.328000 audit[1863]: USER_END pid=1863 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" 
hostname=? addr=? terminal=? res=success' Jan 14 01:43:13.328000 audit[1863]: CRED_DISP pid=1863 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:43:13.349370 sshd[1862]: Connection closed by 20.161.92.111 port 37388 Jan 14 01:43:13.350451 sshd-session[1858]: pam_unix(sshd:session): session closed for user core Jan 14 01:43:13.356000 audit[1858]: USER_END pid=1858 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:43:13.357000 audit[1858]: CRED_DISP pid=1858 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:43:13.362774 systemd[1]: sshd@6-172.239.193.229:22-20.161.92.111:37388.service: Deactivated successfully. Jan 14 01:43:13.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.239.193.229:22-20.161.92.111:37388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:43:13.371407 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 01:43:13.374928 systemd[1]: session-8.scope: Consumed 4.270s CPU time, 229.3M memory peak. Jan 14 01:43:13.378745 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Jan 14 01:43:13.384342 systemd-logind[1577]: Removed session 8. 
Jan 14 01:43:13.605393 kubelet[2803]: E0114 01:43:13.605272 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:15.717117 update_engine[1578]: I20260114 01:43:15.716413 1578 update_attempter.cc:509] Updating boot flags... Jan 14 01:43:17.448000 audit[3233]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:17.448000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc6b205ed0 a2=0 a3=7ffc6b205ebc items=0 ppid=2916 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:17.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:17.454000 audit[3233]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:17.454000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6b205ed0 a2=0 a3=0 items=0 ppid=2916 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:17.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:17.476000 audit[3235]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:17.476000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=6736 a0=3 a1=7ffd2f068fc0 a2=0 a3=7ffd2f068fac items=0 ppid=2916 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:17.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:17.480000 audit[3235]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:17.480000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2f068fc0 a2=0 a3=0 items=0 ppid=2916 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:17.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:18.387118 systemd-timesyncd[1529]: Timed out waiting for reply from [2605:6400:488d:e1b2:84ba:ceab:2099:353]:123 (2.flatcar.pool.ntp.org). Jan 14 01:43:19.187359 systemd-resolved[1271]: Clock change detected. Flushing caches. Jan 14 01:43:19.188122 systemd-timesyncd[1529]: Contacted time server [2606:82c0:23::e]:123 (2.flatcar.pool.ntp.org). Jan 14 01:43:19.188978 systemd-timesyncd[1529]: Initial clock synchronization to Wed 2026-01-14 01:43:19.187229 UTC. 
Jan 14 01:43:19.306011 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 14 01:43:19.306117 kernel: audit: type=1325 audit(1768354999.277:552): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:19.277000 audit[3237]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:19.277000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd378848d0 a2=0 a3=7ffd378848bc items=0 ppid=2916 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:19.320442 kernel: audit: type=1300 audit(1768354999.277:552): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd378848d0 a2=0 a3=7ffd378848bc items=0 ppid=2916 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:19.326473 kernel: audit: type=1327 audit(1768354999.277:552): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:19.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:19.310000 audit[3237]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:19.310000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd378848d0 a2=0 a3=0 items=0 ppid=2916 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:19.332608 kernel: audit: type=1325 audit(1768354999.310:553): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:19.332661 kernel: audit: type=1300 audit(1768354999.310:553): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd378848d0 a2=0 a3=0 items=0 ppid=2916 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:19.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:19.339744 kernel: audit: type=1327 audit(1768354999.310:553): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:19.950748 systemd[1]: Created slice kubepods-besteffort-pod2cb8a8a4_6673_40df_b1c0_32e2993666fb.slice - libcontainer container kubepods-besteffort-pod2cb8a8a4_6673_40df_b1c0_32e2993666fb.slice. Jan 14 01:43:20.033202 systemd[1]: Created slice kubepods-besteffort-podfb2524cb_aded_4b82_afa9_c3877826a824.slice - libcontainer container kubepods-besteffort-podfb2524cb_aded_4b82_afa9_c3877826a824.slice. 
Jan 14 01:43:20.034116 kubelet[2803]: I0114 01:43:20.034086 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cb8a8a4-6673-40df-b1c0-32e2993666fb-tigera-ca-bundle\") pod \"calico-typha-6f9c479774-mrf2z\" (UID: \"2cb8a8a4-6673-40df-b1c0-32e2993666fb\") " pod="calico-system/calico-typha-6f9c479774-mrf2z" Jan 14 01:43:20.034833 kubelet[2803]: I0114 01:43:20.034121 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2cb8a8a4-6673-40df-b1c0-32e2993666fb-typha-certs\") pod \"calico-typha-6f9c479774-mrf2z\" (UID: \"2cb8a8a4-6673-40df-b1c0-32e2993666fb\") " pod="calico-system/calico-typha-6f9c479774-mrf2z" Jan 14 01:43:20.034833 kubelet[2803]: I0114 01:43:20.034138 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rzb\" (UniqueName: \"kubernetes.io/projected/2cb8a8a4-6673-40df-b1c0-32e2993666fb-kube-api-access-f4rzb\") pod \"calico-typha-6f9c479774-mrf2z\" (UID: \"2cb8a8a4-6673-40df-b1c0-32e2993666fb\") " pod="calico-system/calico-typha-6f9c479774-mrf2z" Jan 14 01:43:20.135041 kubelet[2803]: I0114 01:43:20.134997 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fb2524cb-aded-4b82-afa9-c3877826a824-cni-net-dir\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.135198 kubelet[2803]: I0114 01:43:20.135075 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb2524cb-aded-4b82-afa9-c3877826a824-lib-modules\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " 
pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.135198 kubelet[2803]: I0114 01:43:20.135109 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb2524cb-aded-4b82-afa9-c3877826a824-tigera-ca-bundle\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.135198 kubelet[2803]: I0114 01:43:20.135129 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fb2524cb-aded-4b82-afa9-c3877826a824-var-run-calico\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.135198 kubelet[2803]: I0114 01:43:20.135158 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fb2524cb-aded-4b82-afa9-c3877826a824-cni-log-dir\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.135198 kubelet[2803]: I0114 01:43:20.135174 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncccw\" (UniqueName: \"kubernetes.io/projected/fb2524cb-aded-4b82-afa9-c3877826a824-kube-api-access-ncccw\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.135534 kubelet[2803]: I0114 01:43:20.135194 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fb2524cb-aded-4b82-afa9-c3877826a824-flexvol-driver-host\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 
14 01:43:20.135534 kubelet[2803]: I0114 01:43:20.135211 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb2524cb-aded-4b82-afa9-c3877826a824-var-lib-calico\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.135534 kubelet[2803]: I0114 01:43:20.135441 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fb2524cb-aded-4b82-afa9-c3877826a824-cni-bin-dir\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.135534 kubelet[2803]: I0114 01:43:20.135470 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fb2524cb-aded-4b82-afa9-c3877826a824-node-certs\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.135534 kubelet[2803]: I0114 01:43:20.135485 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fb2524cb-aded-4b82-afa9-c3877826a824-policysync\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.135642 kubelet[2803]: I0114 01:43:20.135500 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb2524cb-aded-4b82-afa9-c3877826a824-xtables-lock\") pod \"calico-node-gb8gz\" (UID: \"fb2524cb-aded-4b82-afa9-c3877826a824\") " pod="calico-system/calico-node-gb8gz" Jan 14 01:43:20.183637 kubelet[2803]: E0114 01:43:20.183302 2803 pod_workers.go:1301] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:43:20.237579 kubelet[2803]: I0114 01:43:20.237128 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5hz7\" (UniqueName: \"kubernetes.io/projected/27494ae0-0ad7-4d62-b447-69c7f55fa588-kube-api-access-c5hz7\") pod \"csi-node-driver-gg5g8\" (UID: \"27494ae0-0ad7-4d62-b447-69c7f55fa588\") " pod="calico-system/csi-node-driver-gg5g8" Jan 14 01:43:20.237579 kubelet[2803]: I0114 01:43:20.237216 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27494ae0-0ad7-4d62-b447-69c7f55fa588-registration-dir\") pod \"csi-node-driver-gg5g8\" (UID: \"27494ae0-0ad7-4d62-b447-69c7f55fa588\") " pod="calico-system/csi-node-driver-gg5g8" Jan 14 01:43:20.237579 kubelet[2803]: I0114 01:43:20.237231 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/27494ae0-0ad7-4d62-b447-69c7f55fa588-varrun\") pod \"csi-node-driver-gg5g8\" (UID: \"27494ae0-0ad7-4d62-b447-69c7f55fa588\") " pod="calico-system/csi-node-driver-gg5g8" Jan 14 01:43:20.237579 kubelet[2803]: I0114 01:43:20.237263 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27494ae0-0ad7-4d62-b447-69c7f55fa588-kubelet-dir\") pod \"csi-node-driver-gg5g8\" (UID: \"27494ae0-0ad7-4d62-b447-69c7f55fa588\") " pod="calico-system/csi-node-driver-gg5g8" Jan 14 01:43:20.237579 kubelet[2803]: I0114 01:43:20.237285 2803 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27494ae0-0ad7-4d62-b447-69c7f55fa588-socket-dir\") pod \"csi-node-driver-gg5g8\" (UID: \"27494ae0-0ad7-4d62-b447-69c7f55fa588\") " pod="calico-system/csi-node-driver-gg5g8" Jan 14 01:43:20.246604 kubelet[2803]: E0114 01:43:20.246570 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.246779 kubelet[2803]: W0114 01:43:20.246719 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.246779 kubelet[2803]: E0114 01:43:20.246744 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.254573 kubelet[2803]: E0114 01:43:20.254547 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:20.254886 containerd[1600]: time="2026-01-14T01:43:20.254857248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f9c479774-mrf2z,Uid:2cb8a8a4-6673-40df-b1c0-32e2993666fb,Namespace:calico-system,Attempt:0,}" Jan 14 01:43:20.260486 kubelet[2803]: E0114 01:43:20.260456 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.260681 kubelet[2803]: W0114 01:43:20.260624 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.260681 kubelet[2803]: E0114 01:43:20.260650 2803 
plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.280380 containerd[1600]: time="2026-01-14T01:43:20.279975115Z" level=info msg="connecting to shim b5c232ffd35e2948d056d1e2704ce80e345373360785ea67fda817d2a6933635" address="unix:///run/containerd/s/7ad75b904aa0e3a363a499dc97e84435aa96f69cebe046eb49f2a5a6e7b2e998" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:20.306575 systemd[1]: Started cri-containerd-b5c232ffd35e2948d056d1e2704ce80e345373360785ea67fda817d2a6933635.scope - libcontainer container b5c232ffd35e2948d056d1e2704ce80e345373360785ea67fda817d2a6933635. Jan 14 01:43:20.318000 audit: BPF prog-id=163 op=LOAD Jan 14 01:43:20.321578 kernel: audit: type=1334 audit(1768355000.318:554): prog-id=163 op=LOAD Jan 14 01:43:20.321000 audit: BPF prog-id=164 op=LOAD Jan 14 01:43:20.321000 audit[3265]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3254 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.325748 kernel: audit: type=1334 audit(1768355000.321:555): prog-id=164 op=LOAD Jan 14 01:43:20.325790 kernel: audit: type=1300 audit(1768355000.321:555): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3254 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235633233326666643335653239343864303536643165323730346365 Jan 14 
01:43:20.333516 kernel: audit: type=1327 audit(1768355000.321:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235633233326666643335653239343864303536643165323730346365 Jan 14 01:43:20.338389 kubelet[2803]: E0114 01:43:20.338006 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:20.339451 containerd[1600]: time="2026-01-14T01:43:20.338956146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gb8gz,Uid:fb2524cb-aded-4b82-afa9-c3877826a824,Namespace:calico-system,Attempt:0,}" Jan 14 01:43:20.321000 audit: BPF prog-id=164 op=UNLOAD Jan 14 01:43:20.321000 audit[3265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235633233326666643335653239343864303536643165323730346365 Jan 14 01:43:20.323000 audit: BPF prog-id=165 op=LOAD Jan 14 01:43:20.323000 audit[3265]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3254 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.323000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235633233326666643335653239343864303536643165323730346365 Jan 14 01:43:20.323000 audit: BPF prog-id=166 op=LOAD Jan 14 01:43:20.342274 kubelet[2803]: E0114 01:43:20.340995 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.342274 kubelet[2803]: W0114 01:43:20.341774 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.342274 kubelet[2803]: E0114 01:43:20.342021 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.343269 kubelet[2803]: E0114 01:43:20.342976 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.343366 kubelet[2803]: W0114 01:43:20.343352 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.343495 kubelet[2803]: E0114 01:43:20.343481 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.323000 audit[3265]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3254 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235633233326666643335653239343864303536643165323730346365 Jan 14 01:43:20.323000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:43:20.323000 audit[3265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235633233326666643335653239343864303536643165323730346365 Jan 14 01:43:20.323000 audit: BPF prog-id=165 op=UNLOAD Jan 14 01:43:20.323000 audit[3265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.323000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235633233326666643335653239343864303536643165323730346365 Jan 14 01:43:20.323000 audit: BPF prog-id=167 op=LOAD Jan 14 01:43:20.323000 audit[3265]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3254 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235633233326666643335653239343864303536643165323730346365 Jan 14 01:43:20.345157 kubelet[2803]: E0114 01:43:20.344585 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.345157 kubelet[2803]: W0114 01:43:20.344626 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.345157 kubelet[2803]: E0114 01:43:20.344639 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.345000 audit[3285]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:20.345000 audit[3285]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc961a51b0 a2=0 a3=7ffc961a519c items=0 ppid=2916 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.345000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:20.346636 kubelet[2803]: E0114 01:43:20.345957 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.346636 kubelet[2803]: W0114 01:43:20.345966 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.346636 kubelet[2803]: E0114 01:43:20.346471 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.347142 kubelet[2803]: E0114 01:43:20.347128 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.347142 kubelet[2803]: W0114 01:43:20.347140 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.347213 kubelet[2803]: E0114 01:43:20.347150 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.348079 kubelet[2803]: E0114 01:43:20.348064 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.348079 kubelet[2803]: W0114 01:43:20.348077 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.348200 kubelet[2803]: E0114 01:43:20.348086 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.348961 kubelet[2803]: E0114 01:43:20.348938 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.349000 kubelet[2803]: W0114 01:43:20.348952 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.349000 kubelet[2803]: E0114 01:43:20.348985 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.348000 audit[3285]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:20.348000 audit[3285]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc961a51b0 a2=0 a3=0 items=0 ppid=2916 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:20.350501 kubelet[2803]: E0114 01:43:20.350473 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.350501 kubelet[2803]: W0114 01:43:20.350487 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.350501 kubelet[2803]: E0114 01:43:20.350496 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.350773 kubelet[2803]: E0114 01:43:20.350669 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.350773 kubelet[2803]: W0114 01:43:20.350677 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.350773 kubelet[2803]: E0114 01:43:20.350685 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.351032 kubelet[2803]: E0114 01:43:20.350839 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.351032 kubelet[2803]: W0114 01:43:20.350846 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.351032 kubelet[2803]: E0114 01:43:20.350853 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.351032 kubelet[2803]: E0114 01:43:20.351003 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.351032 kubelet[2803]: W0114 01:43:20.351011 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.351032 kubelet[2803]: E0114 01:43:20.351019 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.352373 kubelet[2803]: E0114 01:43:20.352353 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.352373 kubelet[2803]: W0114 01:43:20.352367 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.352455 kubelet[2803]: E0114 01:43:20.352377 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.353177 kubelet[2803]: E0114 01:43:20.353159 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.353177 kubelet[2803]: W0114 01:43:20.353170 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.353245 kubelet[2803]: E0114 01:43:20.353181 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.353911 kubelet[2803]: E0114 01:43:20.353576 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.353911 kubelet[2803]: W0114 01:43:20.353587 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.353911 kubelet[2803]: E0114 01:43:20.353596 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.353911 kubelet[2803]: E0114 01:43:20.353863 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.353911 kubelet[2803]: W0114 01:43:20.353870 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.353911 kubelet[2803]: E0114 01:43:20.353878 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.354338 kubelet[2803]: E0114 01:43:20.354319 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.354338 kubelet[2803]: W0114 01:43:20.354333 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.354396 kubelet[2803]: E0114 01:43:20.354341 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.354884 kubelet[2803]: E0114 01:43:20.354847 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.354884 kubelet[2803]: W0114 01:43:20.354858 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.354884 kubelet[2803]: E0114 01:43:20.354868 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.356314 kubelet[2803]: E0114 01:43:20.356013 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.356314 kubelet[2803]: W0114 01:43:20.356030 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.356314 kubelet[2803]: E0114 01:43:20.356040 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.356726 kubelet[2803]: E0114 01:43:20.356715 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.356896 kubelet[2803]: W0114 01:43:20.356851 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.356896 kubelet[2803]: E0114 01:43:20.356882 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.357530 kubelet[2803]: E0114 01:43:20.357517 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.357731 kubelet[2803]: W0114 01:43:20.357574 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.357731 kubelet[2803]: E0114 01:43:20.357586 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.358135 kubelet[2803]: E0114 01:43:20.358124 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.358585 kubelet[2803]: W0114 01:43:20.358177 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.358585 kubelet[2803]: E0114 01:43:20.358189 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.358713 kubelet[2803]: E0114 01:43:20.358702 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.358762 kubelet[2803]: W0114 01:43:20.358752 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.358830 kubelet[2803]: E0114 01:43:20.358793 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.359336 kubelet[2803]: E0114 01:43:20.359324 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.359462 kubelet[2803]: W0114 01:43:20.359436 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.359462 kubelet[2803]: E0114 01:43:20.359449 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.360388 kubelet[2803]: E0114 01:43:20.359948 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.360388 kubelet[2803]: W0114 01:43:20.360008 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.360388 kubelet[2803]: E0114 01:43:20.360018 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.362027 kubelet[2803]: E0114 01:43:20.361993 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.362246 kubelet[2803]: W0114 01:43:20.362112 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.362364 kubelet[2803]: E0114 01:43:20.362350 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:20.380462 kubelet[2803]: E0114 01:43:20.379916 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:20.380462 kubelet[2803]: W0114 01:43:20.379935 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:20.380462 kubelet[2803]: E0114 01:43:20.379950 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:20.387042 containerd[1600]: time="2026-01-14T01:43:20.386991072Z" level=info msg="connecting to shim ed3aee3a8010d799aab600fc59d4174343e9d8faa800abb66c2e4c13a336a551" address="unix:///run/containerd/s/4bca3fadefae110ff72c1baaed04aeb875def543d1a048ee79a9553fd2bec8d3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:20.391463 containerd[1600]: time="2026-01-14T01:43:20.391404370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f9c479774-mrf2z,Uid:2cb8a8a4-6673-40df-b1c0-32e2993666fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5c232ffd35e2948d056d1e2704ce80e345373360785ea67fda817d2a6933635\"" Jan 14 01:43:20.392225 kubelet[2803]: E0114 01:43:20.392210 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:20.396958 containerd[1600]: time="2026-01-14T01:43:20.396938237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:43:20.418609 systemd[1]: Started cri-containerd-ed3aee3a8010d799aab600fc59d4174343e9d8faa800abb66c2e4c13a336a551.scope - libcontainer container ed3aee3a8010d799aab600fc59d4174343e9d8faa800abb66c2e4c13a336a551. 
Jan 14 01:43:20.432000 audit: BPF prog-id=168 op=LOAD Jan 14 01:43:20.432000 audit: BPF prog-id=169 op=LOAD Jan 14 01:43:20.432000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3322 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336165653361383031306437393961616236303066633539643431 Jan 14 01:43:20.432000 audit: BPF prog-id=169 op=UNLOAD Jan 14 01:43:20.432000 audit[3339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336165653361383031306437393961616236303066633539643431 Jan 14 01:43:20.432000 audit: BPF prog-id=170 op=LOAD Jan 14 01:43:20.432000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3322 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.432000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336165653361383031306437393961616236303066633539643431 Jan 14 01:43:20.433000 audit: BPF prog-id=171 op=LOAD Jan 14 01:43:20.433000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3322 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336165653361383031306437393961616236303066633539643431 Jan 14 01:43:20.433000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:43:20.433000 audit[3339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336165653361383031306437393961616236303066633539643431 Jan 14 01:43:20.433000 audit: BPF prog-id=170 op=UNLOAD Jan 14 01:43:20.433000 audit[3339]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:43:20.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336165653361383031306437393961616236303066633539643431 Jan 14 01:43:20.433000 audit: BPF prog-id=172 op=LOAD Jan 14 01:43:20.433000 audit[3339]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3322 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:20.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336165653361383031306437393961616236303066633539643431 Jan 14 01:43:20.451524 containerd[1600]: time="2026-01-14T01:43:20.451402960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gb8gz,Uid:fb2524cb-aded-4b82-afa9-c3877826a824,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed3aee3a8010d799aab600fc59d4174343e9d8faa800abb66c2e4c13a336a551\"" Jan 14 01:43:20.452051 kubelet[2803]: E0114 01:43:20.452031 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:21.330330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899864139.mount: Deactivated successfully. 
Jan 14 01:43:21.877350 containerd[1600]: time="2026-01-14T01:43:21.876568597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:21.880534 containerd[1600]: time="2026-01-14T01:43:21.880252935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:21.882043 containerd[1600]: time="2026-01-14T01:43:21.881987624Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:21.883798 containerd[1600]: time="2026-01-14T01:43:21.883747823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:21.884638 containerd[1600]: time="2026-01-14T01:43:21.884411373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.487377256s" Jan 14 01:43:21.884638 containerd[1600]: time="2026-01-14T01:43:21.884471653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:43:21.886904 containerd[1600]: time="2026-01-14T01:43:21.886771022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:43:21.907759 containerd[1600]: time="2026-01-14T01:43:21.907709151Z" level=info msg="CreateContainer within sandbox \"b5c232ffd35e2948d056d1e2704ce80e345373360785ea67fda817d2a6933635\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:43:21.912585 containerd[1600]: time="2026-01-14T01:43:21.912554849Z" level=info msg="Container 6a11c6d9708cddd6928a1c52c112ac836552e7c9cba174e123b257cd95412a96: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:43:21.930629 containerd[1600]: time="2026-01-14T01:43:21.930581080Z" level=info msg="CreateContainer within sandbox \"b5c232ffd35e2948d056d1e2704ce80e345373360785ea67fda817d2a6933635\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6a11c6d9708cddd6928a1c52c112ac836552e7c9cba174e123b257cd95412a96\"" Jan 14 01:43:21.933441 containerd[1600]: time="2026-01-14T01:43:21.932657149Z" level=info msg="StartContainer for \"6a11c6d9708cddd6928a1c52c112ac836552e7c9cba174e123b257cd95412a96\"" Jan 14 01:43:21.934242 containerd[1600]: time="2026-01-14T01:43:21.934200898Z" level=info msg="connecting to shim 6a11c6d9708cddd6928a1c52c112ac836552e7c9cba174e123b257cd95412a96" address="unix:///run/containerd/s/7ad75b904aa0e3a363a499dc97e84435aa96f69cebe046eb49f2a5a6e7b2e998" protocol=ttrpc version=3 Jan 14 01:43:21.957570 systemd[1]: Started cri-containerd-6a11c6d9708cddd6928a1c52c112ac836552e7c9cba174e123b257cd95412a96.scope - libcontainer container 6a11c6d9708cddd6928a1c52c112ac836552e7c9cba174e123b257cd95412a96. 
Jan 14 01:43:21.979000 audit: BPF prog-id=173 op=LOAD Jan 14 01:43:21.980000 audit: BPF prog-id=174 op=LOAD Jan 14 01:43:21.980000 audit[3374]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3254 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:21.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661313163366439373038636464643639323861316335326331313261 Jan 14 01:43:21.980000 audit: BPF prog-id=174 op=UNLOAD Jan 14 01:43:21.980000 audit[3374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:21.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661313163366439373038636464643639323861316335326331313261 Jan 14 01:43:21.980000 audit: BPF prog-id=175 op=LOAD Jan 14 01:43:21.980000 audit[3374]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3254 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:21.980000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661313163366439373038636464643639323861316335326331313261 Jan 14 01:43:21.980000 audit: BPF prog-id=176 op=LOAD Jan 14 01:43:21.980000 audit[3374]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3254 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:21.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661313163366439373038636464643639323861316335326331313261 Jan 14 01:43:21.980000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:43:21.980000 audit[3374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:21.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661313163366439373038636464643639323861316335326331313261 Jan 14 01:43:21.980000 audit: BPF prog-id=175 op=UNLOAD Jan 14 01:43:21.980000 audit[3374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:43:21.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661313163366439373038636464643639323861316335326331313261 Jan 14 01:43:21.980000 audit: BPF prog-id=177 op=LOAD Jan 14 01:43:21.980000 audit[3374]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3254 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:21.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661313163366439373038636464643639323861316335326331313261 Jan 14 01:43:22.025518 containerd[1600]: time="2026-01-14T01:43:22.025455562Z" level=info msg="StartContainer for \"6a11c6d9708cddd6928a1c52c112ac836552e7c9cba174e123b257cd95412a96\" returns successfully" Jan 14 01:43:22.320027 kubelet[2803]: E0114 01:43:22.319964 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:43:22.423036 kubelet[2803]: E0114 01:43:22.422972 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:22.427069 kubelet[2803]: E0114 01:43:22.426961 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Jan 14 01:43:22.427069 kubelet[2803]: W0114 01:43:22.426987 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.427069 kubelet[2803]: E0114 01:43:22.427008 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.427547 kubelet[2803]: E0114 01:43:22.427523 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.427547 kubelet[2803]: W0114 01:43:22.427539 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.427547 kubelet[2803]: E0114 01:43:22.427551 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.428937 kubelet[2803]: E0114 01:43:22.428847 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.428937 kubelet[2803]: W0114 01:43:22.428875 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.428937 kubelet[2803]: E0114 01:43:22.428892 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.430236 kubelet[2803]: E0114 01:43:22.429491 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.430236 kubelet[2803]: W0114 01:43:22.429510 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.430236 kubelet[2803]: E0114 01:43:22.429520 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.430236 kubelet[2803]: E0114 01:43:22.429752 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.430236 kubelet[2803]: W0114 01:43:22.429761 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.430236 kubelet[2803]: E0114 01:43:22.429771 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.430399 kubelet[2803]: E0114 01:43:22.430275 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.430399 kubelet[2803]: W0114 01:43:22.430284 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.430399 kubelet[2803]: E0114 01:43:22.430293 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.430593 kubelet[2803]: E0114 01:43:22.430508 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.430593 kubelet[2803]: W0114 01:43:22.430519 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.430593 kubelet[2803]: E0114 01:43:22.430526 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.430982 kubelet[2803]: E0114 01:43:22.430898 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.430982 kubelet[2803]: W0114 01:43:22.430912 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.430982 kubelet[2803]: E0114 01:43:22.430920 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.431670 kubelet[2803]: E0114 01:43:22.431642 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.431670 kubelet[2803]: W0114 01:43:22.431660 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.431670 kubelet[2803]: E0114 01:43:22.431670 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.432519 kubelet[2803]: E0114 01:43:22.432493 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.432519 kubelet[2803]: W0114 01:43:22.432510 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.432519 kubelet[2803]: E0114 01:43:22.432520 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.433318 kubelet[2803]: E0114 01:43:22.433287 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.433318 kubelet[2803]: W0114 01:43:22.433306 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.433318 kubelet[2803]: E0114 01:43:22.433315 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.433972 kubelet[2803]: E0114 01:43:22.433941 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.433972 kubelet[2803]: W0114 01:43:22.433961 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.433972 kubelet[2803]: E0114 01:43:22.433970 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.434309 kubelet[2803]: E0114 01:43:22.434226 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.434309 kubelet[2803]: W0114 01:43:22.434306 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.434370 kubelet[2803]: E0114 01:43:22.434317 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.436387 kubelet[2803]: E0114 01:43:22.436210 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.436387 kubelet[2803]: W0114 01:43:22.436226 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.436387 kubelet[2803]: E0114 01:43:22.436235 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.436497 kubelet[2803]: E0114 01:43:22.436399 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.436497 kubelet[2803]: W0114 01:43:22.436407 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.436497 kubelet[2803]: E0114 01:43:22.436449 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.467700 kubelet[2803]: E0114 01:43:22.467647 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.467700 kubelet[2803]: W0114 01:43:22.467695 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.467910 kubelet[2803]: E0114 01:43:22.467714 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.468732 kubelet[2803]: E0114 01:43:22.468698 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.468732 kubelet[2803]: W0114 01:43:22.468731 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.468916 kubelet[2803]: E0114 01:43:22.468745 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.470123 kubelet[2803]: E0114 01:43:22.470087 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.470123 kubelet[2803]: W0114 01:43:22.470102 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.470123 kubelet[2803]: E0114 01:43:22.470115 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.470487 kubelet[2803]: E0114 01:43:22.470455 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.470487 kubelet[2803]: W0114 01:43:22.470473 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.470487 kubelet[2803]: E0114 01:43:22.470483 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.471483 kubelet[2803]: E0114 01:43:22.471453 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.471483 kubelet[2803]: W0114 01:43:22.471471 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.471483 kubelet[2803]: E0114 01:43:22.471480 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.471970 kubelet[2803]: E0114 01:43:22.471946 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.471970 kubelet[2803]: W0114 01:43:22.471964 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.472076 kubelet[2803]: E0114 01:43:22.471974 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.473738 kubelet[2803]: E0114 01:43:22.473685 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.473738 kubelet[2803]: W0114 01:43:22.473701 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.474082 kubelet[2803]: E0114 01:43:22.474039 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.475599 kubelet[2803]: E0114 01:43:22.475476 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.475599 kubelet[2803]: W0114 01:43:22.475492 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.475599 kubelet[2803]: E0114 01:43:22.475502 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.478763 kubelet[2803]: E0114 01:43:22.478666 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.478763 kubelet[2803]: W0114 01:43:22.478685 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.478763 kubelet[2803]: E0114 01:43:22.478695 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.479635 kubelet[2803]: E0114 01:43:22.479591 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.479635 kubelet[2803]: W0114 01:43:22.479628 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.479635 kubelet[2803]: E0114 01:43:22.479638 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.480303 kubelet[2803]: E0114 01:43:22.480215 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.480303 kubelet[2803]: W0114 01:43:22.480229 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.480303 kubelet[2803]: E0114 01:43:22.480238 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.481380 kubelet[2803]: E0114 01:43:22.481357 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.481380 kubelet[2803]: W0114 01:43:22.481375 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.481573 kubelet[2803]: E0114 01:43:22.481386 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.482804 kubelet[2803]: E0114 01:43:22.482686 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.482804 kubelet[2803]: W0114 01:43:22.482722 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.482804 kubelet[2803]: E0114 01:43:22.482733 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.484403 kubelet[2803]: E0114 01:43:22.484373 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.484472 kubelet[2803]: W0114 01:43:22.484452 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.484472 kubelet[2803]: E0114 01:43:22.484465 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.485371 kubelet[2803]: E0114 01:43:22.485316 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.485371 kubelet[2803]: W0114 01:43:22.485351 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.485371 kubelet[2803]: E0114 01:43:22.485362 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.487011 kubelet[2803]: E0114 01:43:22.486990 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.487011 kubelet[2803]: W0114 01:43:22.487005 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.487436 kubelet[2803]: E0114 01:43:22.487243 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.488542 kubelet[2803]: E0114 01:43:22.488496 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.488542 kubelet[2803]: W0114 01:43:22.488517 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.488542 kubelet[2803]: E0114 01:43:22.488527 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:43:22.489255 kubelet[2803]: E0114 01:43:22.489020 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:43:22.489255 kubelet[2803]: W0114 01:43:22.489045 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:43:22.489255 kubelet[2803]: E0114 01:43:22.489075 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:43:22.546064 containerd[1600]: time="2026-01-14T01:43:22.546022492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:22.547546 containerd[1600]: time="2026-01-14T01:43:22.547522261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:22.548139 containerd[1600]: time="2026-01-14T01:43:22.548099991Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:22.551286 containerd[1600]: time="2026-01-14T01:43:22.551253269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:22.552209 containerd[1600]: time="2026-01-14T01:43:22.551934429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 665.106027ms" Jan 14 01:43:22.552706 containerd[1600]: time="2026-01-14T01:43:22.552658879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:43:22.562159 containerd[1600]: time="2026-01-14T01:43:22.562038634Z" level=info msg="CreateContainer within sandbox \"ed3aee3a8010d799aab600fc59d4174343e9d8faa800abb66c2e4c13a336a551\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:43:22.576529 containerd[1600]: time="2026-01-14T01:43:22.573003119Z" level=info msg="Container ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:43:22.581213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477817489.mount: Deactivated successfully. Jan 14 01:43:22.589656 containerd[1600]: time="2026-01-14T01:43:22.589631410Z" level=info msg="CreateContainer within sandbox \"ed3aee3a8010d799aab600fc59d4174343e9d8faa800abb66c2e4c13a336a551\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20\"" Jan 14 01:43:22.591322 containerd[1600]: time="2026-01-14T01:43:22.591281509Z" level=info msg="StartContainer for \"ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20\"" Jan 14 01:43:22.595528 containerd[1600]: time="2026-01-14T01:43:22.595485517Z" level=info msg="connecting to shim ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20" address="unix:///run/containerd/s/4bca3fadefae110ff72c1baaed04aeb875def543d1a048ee79a9553fd2bec8d3" protocol=ttrpc version=3 Jan 14 01:43:22.635376 systemd[1]: Started cri-containerd-ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20.scope - libcontainer container ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20. 
Jan 14 01:43:22.706000 audit: BPF prog-id=178 op=LOAD Jan 14 01:43:22.706000 audit[3450]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3322 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:22.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564303535383633646464616630653665646665376439376138383837 Jan 14 01:43:22.706000 audit: BPF prog-id=179 op=LOAD Jan 14 01:43:22.706000 audit[3450]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3322 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:22.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564303535383633646464616630653665646665376439376138383837 Jan 14 01:43:22.706000 audit: BPF prog-id=179 op=UNLOAD Jan 14 01:43:22.706000 audit[3450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:22.706000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564303535383633646464616630653665646665376439376138383837 Jan 14 01:43:22.706000 audit: BPF prog-id=178 op=UNLOAD Jan 14 01:43:22.706000 audit[3450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:22.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564303535383633646464616630653665646665376439376138383837 Jan 14 01:43:22.706000 audit: BPF prog-id=180 op=LOAD Jan 14 01:43:22.706000 audit[3450]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3322 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:22.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564303535383633646464616630653665646665376439376138383837 Jan 14 01:43:22.743285 containerd[1600]: time="2026-01-14T01:43:22.743232933Z" level=info msg="StartContainer for \"ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20\" returns successfully" Jan 14 01:43:22.762589 systemd[1]: cri-containerd-ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20.scope: Deactivated successfully. 
Jan 14 01:43:22.766000 audit: BPF prog-id=180 op=UNLOAD Jan 14 01:43:22.769335 containerd[1600]: time="2026-01-14T01:43:22.769275950Z" level=info msg="received container exit event container_id:\"ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20\" id:\"ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20\" pid:3462 exited_at:{seconds:1768355002 nanos:768479531}" Jan 14 01:43:22.805370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed055863dddaf0e6edfe7d97a888724eb1d877e164ec85c2b8119fd66c954c20-rootfs.mount: Deactivated successfully. Jan 14 01:43:23.425070 kubelet[2803]: I0114 01:43:23.425034 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:43:23.425879 kubelet[2803]: E0114 01:43:23.425776 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:23.426819 containerd[1600]: time="2026-01-14T01:43:23.426735552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:43:23.432273 kubelet[2803]: E0114 01:43:23.426616 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:23.447274 kubelet[2803]: I0114 01:43:23.446662 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f9c479774-mrf2z" podStartSLOduration=2.956850847 podStartE2EDuration="4.446637542s" podCreationTimestamp="2026-01-14 01:43:19 +0000 UTC" firstStartedPulling="2026-01-14 01:43:20.396005117 +0000 UTC m=+21.188879424" lastFinishedPulling="2026-01-14 01:43:21.885791802 +0000 UTC m=+22.678666119" observedRunningTime="2026-01-14 01:43:22.477901326 +0000 UTC m=+23.270775633" watchObservedRunningTime="2026-01-14 01:43:23.446637542 +0000 UTC m=+24.239511869" Jan 14 
01:43:24.319926 kubelet[2803]: E0114 01:43:24.319869 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:43:25.160569 containerd[1600]: time="2026-01-14T01:43:25.160526175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:25.161509 containerd[1600]: time="2026-01-14T01:43:25.161326244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 01:43:25.162002 containerd[1600]: time="2026-01-14T01:43:25.161969504Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:25.163599 containerd[1600]: time="2026-01-14T01:43:25.163562103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:25.164307 containerd[1600]: time="2026-01-14T01:43:25.164277823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.733063564s" Jan 14 01:43:25.164393 containerd[1600]: time="2026-01-14T01:43:25.164376453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" 
Jan 14 01:43:25.167459 containerd[1600]: time="2026-01-14T01:43:25.167408401Z" level=info msg="CreateContainer within sandbox \"ed3aee3a8010d799aab600fc59d4174343e9d8faa800abb66c2e4c13a336a551\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:43:25.175941 containerd[1600]: time="2026-01-14T01:43:25.175897757Z" level=info msg="Container e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:43:25.179254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3979994238.mount: Deactivated successfully. Jan 14 01:43:25.190374 containerd[1600]: time="2026-01-14T01:43:25.190346330Z" level=info msg="CreateContainer within sandbox \"ed3aee3a8010d799aab600fc59d4174343e9d8faa800abb66c2e4c13a336a551\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4\"" Jan 14 01:43:25.191008 containerd[1600]: time="2026-01-14T01:43:25.190972109Z" level=info msg="StartContainer for \"e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4\"" Jan 14 01:43:25.193443 containerd[1600]: time="2026-01-14T01:43:25.193389498Z" level=info msg="connecting to shim e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4" address="unix:///run/containerd/s/4bca3fadefae110ff72c1baaed04aeb875def543d1a048ee79a9553fd2bec8d3" protocol=ttrpc version=3 Jan 14 01:43:25.216571 systemd[1]: Started cri-containerd-e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4.scope - libcontainer container e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4. 
Jan 14 01:43:25.266535 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 14 01:43:25.266635 kernel: audit: type=1334 audit(1768355005.262:586): prog-id=181 op=LOAD Jan 14 01:43:25.262000 audit: BPF prog-id=181 op=LOAD Jan 14 01:43:25.270457 kernel: audit: type=1300 audit(1768355005.262:586): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3322 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:25.262000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3322 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:25.278449 kernel: audit: type=1327 audit(1768355005.262:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534356134646233366439613236393939656636303964386565396366 Jan 14 01:43:25.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534356134646233366439613236393939656636303964386565396366 Jan 14 01:43:25.262000 audit: BPF prog-id=182 op=LOAD Jan 14 01:43:25.285494 kernel: audit: type=1334 audit(1768355005.262:587): prog-id=182 op=LOAD Jan 14 01:43:25.262000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3322 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:25.301350 kernel: audit: type=1300 audit(1768355005.262:587): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3322 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:25.301599 kernel: audit: type=1327 audit(1768355005.262:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534356134646233366439613236393939656636303964386565396366 Jan 14 01:43:25.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534356134646233366439613236393939656636303964386565396366 Jan 14 01:43:25.262000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:43:25.315005 kernel: audit: type=1334 audit(1768355005.262:588): prog-id=182 op=UNLOAD Jan 14 01:43:25.315093 kernel: audit: type=1300 audit(1768355005.262:588): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:25.262000 audit[3507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:25.322594 kernel: audit: type=1327 audit(1768355005.262:588): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534356134646233366439613236393939656636303964386565396366 Jan 14 01:43:25.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534356134646233366439613236393939656636303964386565396366 Jan 14 01:43:25.324732 kernel: audit: type=1334 audit(1768355005.262:589): prog-id=181 op=UNLOAD Jan 14 01:43:25.262000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:43:25.262000 audit[3507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:25.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534356134646233366439613236393939656636303964386565396366 Jan 14 01:43:25.262000 audit: BPF prog-id=183 op=LOAD Jan 14 01:43:25.262000 audit[3507]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3322 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:25.262000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534356134646233366439613236393939656636303964386565396366 Jan 14 01:43:25.331677 containerd[1600]: time="2026-01-14T01:43:25.331645799Z" level=info msg="StartContainer for \"e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4\" returns successfully" Jan 14 01:43:25.434209 kubelet[2803]: E0114 01:43:25.433824 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:25.846352 systemd[1]: cri-containerd-e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4.scope: Deactivated successfully. Jan 14 01:43:25.847553 systemd[1]: cri-containerd-e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4.scope: Consumed 580ms CPU time, 196.5M memory peak, 171.3M written to disk. Jan 14 01:43:25.848556 containerd[1600]: time="2026-01-14T01:43:25.848045491Z" level=info msg="received container exit event container_id:\"e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4\" id:\"e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4\" pid:3521 exited_at:{seconds:1768355005 nanos:846927801}" Jan 14 01:43:25.851000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:43:25.873005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e45a4db36d9a26999ef609d8ee9cf1ba608f126a0204d4d3858d148812b7acd4-rootfs.mount: Deactivated successfully. Jan 14 01:43:25.882809 kubelet[2803]: I0114 01:43:25.882703 2803 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:43:25.942903 systemd[1]: Created slice kubepods-besteffort-podf7be471a_cdad_47a9_a9b9_30003e7852fb.slice - libcontainer container kubepods-besteffort-podf7be471a_cdad_47a9_a9b9_30003e7852fb.slice. 
Jan 14 01:43:25.953980 systemd[1]: Created slice kubepods-besteffort-pod467c90a2_bf12_4a6d_a6a3_0bb4155d4e42.slice - libcontainer container kubepods-besteffort-pod467c90a2_bf12_4a6d_a6a3_0bb4155d4e42.slice. Jan 14 01:43:25.967926 systemd[1]: Created slice kubepods-burstable-podd646678c_86d1_495a_97d3_cd193380cb78.slice - libcontainer container kubepods-burstable-podd646678c_86d1_495a_97d3_cd193380cb78.slice. Jan 14 01:43:25.984931 systemd[1]: Created slice kubepods-besteffort-pod5131dab4_8de3_41fd_aa18_51b8b1928537.slice - libcontainer container kubepods-besteffort-pod5131dab4_8de3_41fd_aa18_51b8b1928537.slice. Jan 14 01:43:25.993333 systemd[1]: Created slice kubepods-burstable-podd97b54b4_c39b_4d54_a5a1_73190acb9e98.slice - libcontainer container kubepods-burstable-podd97b54b4_c39b_4d54_a5a1_73190acb9e98.slice. Jan 14 01:43:26.001492 systemd[1]: Created slice kubepods-besteffort-pod79093d5d_07cf_4a25_a816_7eeb844e241f.slice - libcontainer container kubepods-besteffort-pod79093d5d_07cf_4a25_a816_7eeb844e241f.slice. 
Jan 14 01:43:26.002306 kubelet[2803]: I0114 01:43:26.002213 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d646678c-86d1-495a-97d3-cd193380cb78-config-volume\") pod \"coredns-674b8bbfcf-n7n5w\" (UID: \"d646678c-86d1-495a-97d3-cd193380cb78\") " pod="kube-system/coredns-674b8bbfcf-n7n5w" Jan 14 01:43:26.002306 kubelet[2803]: I0114 01:43:26.002254 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lctvc\" (UniqueName: \"kubernetes.io/projected/f7be471a-cdad-47a9-a9b9-30003e7852fb-kube-api-access-lctvc\") pod \"whisker-5bf86c57d5-mw4rj\" (UID: \"f7be471a-cdad-47a9-a9b9-30003e7852fb\") " pod="calico-system/whisker-5bf86c57d5-mw4rj" Jan 14 01:43:26.002306 kubelet[2803]: I0114 01:43:26.002273 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/467c90a2-bf12-4a6d-a6a3-0bb4155d4e42-calico-apiserver-certs\") pod \"calico-apiserver-8b466d74c-r9454\" (UID: \"467c90a2-bf12-4a6d-a6a3-0bb4155d4e42\") " pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" Jan 14 01:43:26.002306 kubelet[2803]: I0114 01:43:26.002292 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg8lt\" (UniqueName: \"kubernetes.io/projected/467c90a2-bf12-4a6d-a6a3-0bb4155d4e42-kube-api-access-vg8lt\") pod \"calico-apiserver-8b466d74c-r9454\" (UID: \"467c90a2-bf12-4a6d-a6a3-0bb4155d4e42\") " pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" Jan 14 01:43:26.002306 kubelet[2803]: I0114 01:43:26.002306 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbts\" (UniqueName: \"kubernetes.io/projected/d646678c-86d1-495a-97d3-cd193380cb78-kube-api-access-nlbts\") pod \"coredns-674b8bbfcf-n7n5w\" 
(UID: \"d646678c-86d1-495a-97d3-cd193380cb78\") " pod="kube-system/coredns-674b8bbfcf-n7n5w" Jan 14 01:43:26.002541 kubelet[2803]: I0114 01:43:26.002321 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7be471a-cdad-47a9-a9b9-30003e7852fb-whisker-backend-key-pair\") pod \"whisker-5bf86c57d5-mw4rj\" (UID: \"f7be471a-cdad-47a9-a9b9-30003e7852fb\") " pod="calico-system/whisker-5bf86c57d5-mw4rj" Jan 14 01:43:26.002541 kubelet[2803]: I0114 01:43:26.002335 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7be471a-cdad-47a9-a9b9-30003e7852fb-whisker-ca-bundle\") pod \"whisker-5bf86c57d5-mw4rj\" (UID: \"f7be471a-cdad-47a9-a9b9-30003e7852fb\") " pod="calico-system/whisker-5bf86c57d5-mw4rj" Jan 14 01:43:26.008802 systemd[1]: Created slice kubepods-besteffort-pod10b6b02c_a804_4455_980f_c8e7b004f89d.slice - libcontainer container kubepods-besteffort-pod10b6b02c_a804_4455_980f_c8e7b004f89d.slice. 
Jan 14 01:43:26.103469 kubelet[2803]: I0114 01:43:26.103319 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79093d5d-07cf-4a25-a816-7eeb844e241f-config\") pod \"goldmane-666569f655-l58pb\" (UID: \"79093d5d-07cf-4a25-a816-7eeb844e241f\") " pod="calico-system/goldmane-666569f655-l58pb" Jan 14 01:43:26.103469 kubelet[2803]: I0114 01:43:26.103362 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knm28\" (UniqueName: \"kubernetes.io/projected/d97b54b4-c39b-4d54-a5a1-73190acb9e98-kube-api-access-knm28\") pod \"coredns-674b8bbfcf-4rxwz\" (UID: \"d97b54b4-c39b-4d54-a5a1-73190acb9e98\") " pod="kube-system/coredns-674b8bbfcf-4rxwz" Jan 14 01:43:26.103469 kubelet[2803]: I0114 01:43:26.103404 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pglf\" (UniqueName: \"kubernetes.io/projected/79093d5d-07cf-4a25-a816-7eeb844e241f-kube-api-access-8pglf\") pod \"goldmane-666569f655-l58pb\" (UID: \"79093d5d-07cf-4a25-a816-7eeb844e241f\") " pod="calico-system/goldmane-666569f655-l58pb" Jan 14 01:43:26.104041 kubelet[2803]: I0114 01:43:26.104008 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10b6b02c-a804-4455-980f-c8e7b004f89d-tigera-ca-bundle\") pod \"calico-kube-controllers-8597978bc7-qzzjk\" (UID: \"10b6b02c-a804-4455-980f-c8e7b004f89d\") " pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" Jan 14 01:43:26.106536 kubelet[2803]: I0114 01:43:26.104082 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/79093d5d-07cf-4a25-a816-7eeb844e241f-goldmane-key-pair\") pod \"goldmane-666569f655-l58pb\" (UID: 
\"79093d5d-07cf-4a25-a816-7eeb844e241f\") " pod="calico-system/goldmane-666569f655-l58pb" Jan 14 01:43:26.106536 kubelet[2803]: I0114 01:43:26.104104 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d97b54b4-c39b-4d54-a5a1-73190acb9e98-config-volume\") pod \"coredns-674b8bbfcf-4rxwz\" (UID: \"d97b54b4-c39b-4d54-a5a1-73190acb9e98\") " pod="kube-system/coredns-674b8bbfcf-4rxwz" Jan 14 01:43:26.106536 kubelet[2803]: I0114 01:43:26.104170 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5131dab4-8de3-41fd-aa18-51b8b1928537-calico-apiserver-certs\") pod \"calico-apiserver-8b466d74c-vftwx\" (UID: \"5131dab4-8de3-41fd-aa18-51b8b1928537\") " pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" Jan 14 01:43:26.106536 kubelet[2803]: I0114 01:43:26.104189 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgnbw\" (UniqueName: \"kubernetes.io/projected/5131dab4-8de3-41fd-aa18-51b8b1928537-kube-api-access-sgnbw\") pod \"calico-apiserver-8b466d74c-vftwx\" (UID: \"5131dab4-8de3-41fd-aa18-51b8b1928537\") " pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" Jan 14 01:43:26.106536 kubelet[2803]: I0114 01:43:26.104202 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79093d5d-07cf-4a25-a816-7eeb844e241f-goldmane-ca-bundle\") pod \"goldmane-666569f655-l58pb\" (UID: \"79093d5d-07cf-4a25-a816-7eeb844e241f\") " pod="calico-system/goldmane-666569f655-l58pb" Jan 14 01:43:26.106683 kubelet[2803]: I0114 01:43:26.104239 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd7n7\" (UniqueName: 
\"kubernetes.io/projected/10b6b02c-a804-4455-980f-c8e7b004f89d-kube-api-access-jd7n7\") pod \"calico-kube-controllers-8597978bc7-qzzjk\" (UID: \"10b6b02c-a804-4455-980f-c8e7b004f89d\") " pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" Jan 14 01:43:26.248646 containerd[1600]: time="2026-01-14T01:43:26.248600680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bf86c57d5-mw4rj,Uid:f7be471a-cdad-47a9-a9b9-30003e7852fb,Namespace:calico-system,Attempt:0,}" Jan 14 01:43:26.261911 containerd[1600]: time="2026-01-14T01:43:26.261850314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b466d74c-r9454,Uid:467c90a2-bf12-4a6d-a6a3-0bb4155d4e42,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:43:26.275690 kubelet[2803]: E0114 01:43:26.275402 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:26.277225 containerd[1600]: time="2026-01-14T01:43:26.276981896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n7n5w,Uid:d646678c-86d1-495a-97d3-cd193380cb78,Namespace:kube-system,Attempt:0,}" Jan 14 01:43:26.293581 containerd[1600]: time="2026-01-14T01:43:26.293526458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b466d74c-vftwx,Uid:5131dab4-8de3-41fd-aa18-51b8b1928537,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:43:26.298301 kubelet[2803]: E0114 01:43:26.298280 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:26.299181 containerd[1600]: time="2026-01-14T01:43:26.299150585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rxwz,Uid:d97b54b4-c39b-4d54-a5a1-73190acb9e98,Namespace:kube-system,Attempt:0,}" Jan 14 01:43:26.307728 
containerd[1600]: time="2026-01-14T01:43:26.307572381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-l58pb,Uid:79093d5d-07cf-4a25-a816-7eeb844e241f,Namespace:calico-system,Attempt:0,}" Jan 14 01:43:26.317352 containerd[1600]: time="2026-01-14T01:43:26.317288386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8597978bc7-qzzjk,Uid:10b6b02c-a804-4455-980f-c8e7b004f89d,Namespace:calico-system,Attempt:0,}" Jan 14 01:43:26.328926 systemd[1]: Created slice kubepods-besteffort-pod27494ae0_0ad7_4d62_b447_69c7f55fa588.slice - libcontainer container kubepods-besteffort-pod27494ae0_0ad7_4d62_b447_69c7f55fa588.slice. Jan 14 01:43:26.333331 containerd[1600]: time="2026-01-14T01:43:26.333190558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gg5g8,Uid:27494ae0-0ad7-4d62-b447-69c7f55fa588,Namespace:calico-system,Attempt:0,}" Jan 14 01:43:26.399836 containerd[1600]: time="2026-01-14T01:43:26.399706365Z" level=error msg="Failed to destroy network for sandbox \"77c9d5e9c8dc3190a8089cec2e5f1515189a6b3955d28a96e894de2feecc71fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.403694 containerd[1600]: time="2026-01-14T01:43:26.403532283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bf86c57d5-mw4rj,Uid:f7be471a-cdad-47a9-a9b9-30003e7852fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c9d5e9c8dc3190a8089cec2e5f1515189a6b3955d28a96e894de2feecc71fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.403858 kubelet[2803]: E0114 01:43:26.403793 2803 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c9d5e9c8dc3190a8089cec2e5f1515189a6b3955d28a96e894de2feecc71fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.403939 kubelet[2803]: E0114 01:43:26.403884 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c9d5e9c8dc3190a8089cec2e5f1515189a6b3955d28a96e894de2feecc71fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bf86c57d5-mw4rj" Jan 14 01:43:26.403939 kubelet[2803]: E0114 01:43:26.403936 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c9d5e9c8dc3190a8089cec2e5f1515189a6b3955d28a96e894de2feecc71fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bf86c57d5-mw4rj" Jan 14 01:43:26.404043 kubelet[2803]: E0114 01:43:26.404010 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bf86c57d5-mw4rj_calico-system(f7be471a-cdad-47a9-a9b9-30003e7852fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bf86c57d5-mw4rj_calico-system(f7be471a-cdad-47a9-a9b9-30003e7852fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77c9d5e9c8dc3190a8089cec2e5f1515189a6b3955d28a96e894de2feecc71fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bf86c57d5-mw4rj" podUID="f7be471a-cdad-47a9-a9b9-30003e7852fb" Jan 14 01:43:26.429168 containerd[1600]: time="2026-01-14T01:43:26.429012650Z" level=error msg="Failed to destroy network for sandbox \"7e56f964cdc1076cab2d0b1c12657f8ec62c331d6a4a8991bd986db44b7a2cf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.432743 containerd[1600]: time="2026-01-14T01:43:26.432641368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n7n5w,Uid:d646678c-86d1-495a-97d3-cd193380cb78,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e56f964cdc1076cab2d0b1c12657f8ec62c331d6a4a8991bd986db44b7a2cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.432917 kubelet[2803]: E0114 01:43:26.432852 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e56f964cdc1076cab2d0b1c12657f8ec62c331d6a4a8991bd986db44b7a2cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.432917 kubelet[2803]: E0114 01:43:26.432904 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e56f964cdc1076cab2d0b1c12657f8ec62c331d6a4a8991bd986db44b7a2cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-n7n5w" Jan 14 01:43:26.433153 kubelet[2803]: E0114 01:43:26.432926 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e56f964cdc1076cab2d0b1c12657f8ec62c331d6a4a8991bd986db44b7a2cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n7n5w" Jan 14 01:43:26.433153 kubelet[2803]: E0114 01:43:26.432969 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n7n5w_kube-system(d646678c-86d1-495a-97d3-cd193380cb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n7n5w_kube-system(d646678c-86d1-495a-97d3-cd193380cb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e56f964cdc1076cab2d0b1c12657f8ec62c331d6a4a8991bd986db44b7a2cf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n7n5w" podUID="d646678c-86d1-495a-97d3-cd193380cb78" Jan 14 01:43:26.449531 kubelet[2803]: E0114 01:43:26.449489 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:26.451947 containerd[1600]: time="2026-01-14T01:43:26.451906959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:43:26.459995 containerd[1600]: time="2026-01-14T01:43:26.459959875Z" level=error msg="Failed to destroy network for sandbox \"8ec72278e2031a0cf0a3ff625546342d90af55df4c0776f16d24dd436e7b26ae\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.462753 containerd[1600]: time="2026-01-14T01:43:26.462682543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b466d74c-r9454,Uid:467c90a2-bf12-4a6d-a6a3-0bb4155d4e42,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec72278e2031a0cf0a3ff625546342d90af55df4c0776f16d24dd436e7b26ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.463118 kubelet[2803]: E0114 01:43:26.463024 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec72278e2031a0cf0a3ff625546342d90af55df4c0776f16d24dd436e7b26ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.463118 kubelet[2803]: E0114 01:43:26.463067 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec72278e2031a0cf0a3ff625546342d90af55df4c0776f16d24dd436e7b26ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" Jan 14 01:43:26.463118 kubelet[2803]: E0114 01:43:26.463086 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec72278e2031a0cf0a3ff625546342d90af55df4c0776f16d24dd436e7b26ae\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" Jan 14 01:43:26.463255 kubelet[2803]: E0114 01:43:26.463125 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8b466d74c-r9454_calico-apiserver(467c90a2-bf12-4a6d-a6a3-0bb4155d4e42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8b466d74c-r9454_calico-apiserver(467c90a2-bf12-4a6d-a6a3-0bb4155d4e42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ec72278e2031a0cf0a3ff625546342d90af55df4c0776f16d24dd436e7b26ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:43:26.491113 containerd[1600]: time="2026-01-14T01:43:26.491076439Z" level=error msg="Failed to destroy network for sandbox \"ee44bc03f6c7fca5de59eacbea1d6c330e637a97b6c6da8077068f06e518df70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.493755 containerd[1600]: time="2026-01-14T01:43:26.493627048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-l58pb,Uid:79093d5d-07cf-4a25-a816-7eeb844e241f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee44bc03f6c7fca5de59eacbea1d6c330e637a97b6c6da8077068f06e518df70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 14 01:43:26.493866 kubelet[2803]: E0114 01:43:26.493818 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee44bc03f6c7fca5de59eacbea1d6c330e637a97b6c6da8077068f06e518df70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.493917 kubelet[2803]: E0114 01:43:26.493867 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee44bc03f6c7fca5de59eacbea1d6c330e637a97b6c6da8077068f06e518df70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-l58pb" Jan 14 01:43:26.493917 kubelet[2803]: E0114 01:43:26.493886 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee44bc03f6c7fca5de59eacbea1d6c330e637a97b6c6da8077068f06e518df70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-l58pb" Jan 14 01:43:26.493972 kubelet[2803]: E0114 01:43:26.493925 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-l58pb_calico-system(79093d5d-07cf-4a25-a816-7eeb844e241f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-l58pb_calico-system(79093d5d-07cf-4a25-a816-7eeb844e241f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee44bc03f6c7fca5de59eacbea1d6c330e637a97b6c6da8077068f06e518df70\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:43:26.518911 containerd[1600]: time="2026-01-14T01:43:26.518762125Z" level=error msg="Failed to destroy network for sandbox \"fe7927ab4a48e9af0c0e0c79d10f8a8fb9422f5e753f8b69b36c754860d0b5fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.520934 containerd[1600]: time="2026-01-14T01:43:26.520898224Z" level=error msg="Failed to destroy network for sandbox \"cfd714615e4003695ac1bf5aa759dbfbc2b6e808970c9ef62bfd36d4b0108b7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.522395 containerd[1600]: time="2026-01-14T01:43:26.522360793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b466d74c-vftwx,Uid:5131dab4-8de3-41fd-aa18-51b8b1928537,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7927ab4a48e9af0c0e0c79d10f8a8fb9422f5e753f8b69b36c754860d0b5fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.522908 kubelet[2803]: E0114 01:43:26.522853 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7927ab4a48e9af0c0e0c79d10f8a8fb9422f5e753f8b69b36c754860d0b5fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.522967 kubelet[2803]: E0114 01:43:26.522907 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7927ab4a48e9af0c0e0c79d10f8a8fb9422f5e753f8b69b36c754860d0b5fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" Jan 14 01:43:26.522967 kubelet[2803]: E0114 01:43:26.522930 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7927ab4a48e9af0c0e0c79d10f8a8fb9422f5e753f8b69b36c754860d0b5fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" Jan 14 01:43:26.523030 kubelet[2803]: E0114 01:43:26.522972 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8b466d74c-vftwx_calico-apiserver(5131dab4-8de3-41fd-aa18-51b8b1928537)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8b466d74c-vftwx_calico-apiserver(5131dab4-8de3-41fd-aa18-51b8b1928537)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe7927ab4a48e9af0c0e0c79d10f8a8fb9422f5e753f8b69b36c754860d0b5fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:43:26.525006 containerd[1600]: time="2026-01-14T01:43:26.524961272Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rxwz,Uid:d97b54b4-c39b-4d54-a5a1-73190acb9e98,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd714615e4003695ac1bf5aa759dbfbc2b6e808970c9ef62bfd36d4b0108b7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.525525 kubelet[2803]: E0114 01:43:26.525321 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd714615e4003695ac1bf5aa759dbfbc2b6e808970c9ef62bfd36d4b0108b7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.528561 kubelet[2803]: E0114 01:43:26.528517 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd714615e4003695ac1bf5aa759dbfbc2b6e808970c9ef62bfd36d4b0108b7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4rxwz" Jan 14 01:43:26.528561 kubelet[2803]: E0114 01:43:26.528550 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd714615e4003695ac1bf5aa759dbfbc2b6e808970c9ef62bfd36d4b0108b7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4rxwz" Jan 14 01:43:26.528830 kubelet[2803]: E0114 01:43:26.528636 2803 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4rxwz_kube-system(d97b54b4-c39b-4d54-a5a1-73190acb9e98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4rxwz_kube-system(d97b54b4-c39b-4d54-a5a1-73190acb9e98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfd714615e4003695ac1bf5aa759dbfbc2b6e808970c9ef62bfd36d4b0108b7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4rxwz" podUID="d97b54b4-c39b-4d54-a5a1-73190acb9e98" Jan 14 01:43:26.533323 containerd[1600]: time="2026-01-14T01:43:26.533221188Z" level=error msg="Failed to destroy network for sandbox \"a98d7a177ca35201c47b5ee054d67914aab02558afd4c04a5bc1ab08f32a8f65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.534652 containerd[1600]: time="2026-01-14T01:43:26.534607327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8597978bc7-qzzjk,Uid:10b6b02c-a804-4455-980f-c8e7b004f89d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a98d7a177ca35201c47b5ee054d67914aab02558afd4c04a5bc1ab08f32a8f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.534847 kubelet[2803]: E0114 01:43:26.534805 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a98d7a177ca35201c47b5ee054d67914aab02558afd4c04a5bc1ab08f32a8f65\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.534895 kubelet[2803]: E0114 01:43:26.534851 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a98d7a177ca35201c47b5ee054d67914aab02558afd4c04a5bc1ab08f32a8f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" Jan 14 01:43:26.534895 kubelet[2803]: E0114 01:43:26.534868 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a98d7a177ca35201c47b5ee054d67914aab02558afd4c04a5bc1ab08f32a8f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" Jan 14 01:43:26.534951 kubelet[2803]: E0114 01:43:26.534902 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8597978bc7-qzzjk_calico-system(10b6b02c-a804-4455-980f-c8e7b004f89d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8597978bc7-qzzjk_calico-system(10b6b02c-a804-4455-980f-c8e7b004f89d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a98d7a177ca35201c47b5ee054d67914aab02558afd4c04a5bc1ab08f32a8f65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" 
podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:43:26.546075 containerd[1600]: time="2026-01-14T01:43:26.546020962Z" level=error msg="Failed to destroy network for sandbox \"bcf4c2df898c9a57df6f5aad0e5b3ea87fbb0e7c7e07c23110897fd215d4dca8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.547488 containerd[1600]: time="2026-01-14T01:43:26.547456831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gg5g8,Uid:27494ae0-0ad7-4d62-b447-69c7f55fa588,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcf4c2df898c9a57df6f5aad0e5b3ea87fbb0e7c7e07c23110897fd215d4dca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.547689 kubelet[2803]: E0114 01:43:26.547638 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcf4c2df898c9a57df6f5aad0e5b3ea87fbb0e7c7e07c23110897fd215d4dca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:43:26.547740 kubelet[2803]: E0114 01:43:26.547701 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcf4c2df898c9a57df6f5aad0e5b3ea87fbb0e7c7e07c23110897fd215d4dca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gg5g8" Jan 14 01:43:26.547740 kubelet[2803]: E0114 
01:43:26.547720 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcf4c2df898c9a57df6f5aad0e5b3ea87fbb0e7c7e07c23110897fd215d4dca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gg5g8" Jan 14 01:43:26.547815 kubelet[2803]: E0114 01:43:26.547785 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gg5g8_calico-system(27494ae0-0ad7-4d62-b447-69c7f55fa588)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gg5g8_calico-system(27494ae0-0ad7-4d62-b447-69c7f55fa588)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcf4c2df898c9a57df6f5aad0e5b3ea87fbb0e7c7e07c23110897fd215d4dca8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:43:30.006261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885423098.mount: Deactivated successfully. 
Jan 14 01:43:30.028992 containerd[1600]: time="2026-01-14T01:43:30.028953050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:30.029853 containerd[1600]: time="2026-01-14T01:43:30.029825599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 01:43:30.030519 containerd[1600]: time="2026-01-14T01:43:30.030475939Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:30.031980 containerd[1600]: time="2026-01-14T01:43:30.031942188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:43:30.032453 containerd[1600]: time="2026-01-14T01:43:30.032281708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.580334319s" Jan 14 01:43:30.032453 containerd[1600]: time="2026-01-14T01:43:30.032309198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:43:30.054620 containerd[1600]: time="2026-01-14T01:43:30.053773617Z" level=info msg="CreateContainer within sandbox \"ed3aee3a8010d799aab600fc59d4174343e9d8faa800abb66c2e4c13a336a551\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:43:30.078181 containerd[1600]: time="2026-01-14T01:43:30.078143665Z" level=info msg="Container 
ad4d2913d6ef53e141dc07f6cf1f1e877bd1c0e3f14bee984f09f9568f0de7b7: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:43:30.081442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291337078.mount: Deactivated successfully. Jan 14 01:43:30.087505 containerd[1600]: time="2026-01-14T01:43:30.087406131Z" level=info msg="CreateContainer within sandbox \"ed3aee3a8010d799aab600fc59d4174343e9d8faa800abb66c2e4c13a336a551\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ad4d2913d6ef53e141dc07f6cf1f1e877bd1c0e3f14bee984f09f9568f0de7b7\"" Jan 14 01:43:30.088607 containerd[1600]: time="2026-01-14T01:43:30.088559930Z" level=info msg="StartContainer for \"ad4d2913d6ef53e141dc07f6cf1f1e877bd1c0e3f14bee984f09f9568f0de7b7\"" Jan 14 01:43:30.090106 containerd[1600]: time="2026-01-14T01:43:30.090085269Z" level=info msg="connecting to shim ad4d2913d6ef53e141dc07f6cf1f1e877bd1c0e3f14bee984f09f9568f0de7b7" address="unix:///run/containerd/s/4bca3fadefae110ff72c1baaed04aeb875def543d1a048ee79a9553fd2bec8d3" protocol=ttrpc version=3 Jan 14 01:43:30.139729 systemd[1]: Started cri-containerd-ad4d2913d6ef53e141dc07f6cf1f1e877bd1c0e3f14bee984f09f9568f0de7b7.scope - libcontainer container ad4d2913d6ef53e141dc07f6cf1f1e877bd1c0e3f14bee984f09f9568f0de7b7. 
Jan 14 01:43:30.196000 audit: BPF prog-id=184 op=LOAD Jan 14 01:43:30.196000 audit[3776]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3322 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:30.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164346432393133643665663533653134316463303766366366316631 Jan 14 01:43:30.196000 audit: BPF prog-id=185 op=LOAD Jan 14 01:43:30.196000 audit[3776]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3322 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:30.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164346432393133643665663533653134316463303766366366316631 Jan 14 01:43:30.196000 audit: BPF prog-id=185 op=UNLOAD Jan 14 01:43:30.196000 audit[3776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:30.196000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164346432393133643665663533653134316463303766366366316631 Jan 14 01:43:30.196000 audit: BPF prog-id=184 op=UNLOAD Jan 14 01:43:30.196000 audit[3776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:30.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164346432393133643665663533653134316463303766366366316631 Jan 14 01:43:30.196000 audit: BPF prog-id=186 op=LOAD Jan 14 01:43:30.196000 audit[3776]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3322 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:30.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164346432393133643665663533653134316463303766366366316631 Jan 14 01:43:30.227165 containerd[1600]: time="2026-01-14T01:43:30.226666521Z" level=info msg="StartContainer for \"ad4d2913d6ef53e141dc07f6cf1f1e877bd1c0e3f14bee984f09f9568f0de7b7\" returns successfully" Jan 14 01:43:30.317498 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jan 14 01:43:30.317605 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 01:43:30.478206 kubelet[2803]: E0114 01:43:30.477769 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:30.491527 kubelet[2803]: I0114 01:43:30.491472 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gb8gz" podStartSLOduration=0.911379239 podStartE2EDuration="10.491459808s" podCreationTimestamp="2026-01-14 01:43:20 +0000 UTC" firstStartedPulling="2026-01-14 01:43:20.452874969 +0000 UTC m=+21.245749276" lastFinishedPulling="2026-01-14 01:43:30.032955538 +0000 UTC m=+30.825829845" observedRunningTime="2026-01-14 01:43:30.489900689 +0000 UTC m=+31.282774996" watchObservedRunningTime="2026-01-14 01:43:30.491459808 +0000 UTC m=+31.284334125" Jan 14 01:43:30.532989 kubelet[2803]: I0114 01:43:30.532959 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7be471a-cdad-47a9-a9b9-30003e7852fb-whisker-ca-bundle\") pod \"f7be471a-cdad-47a9-a9b9-30003e7852fb\" (UID: \"f7be471a-cdad-47a9-a9b9-30003e7852fb\") " Jan 14 01:43:30.534075 kubelet[2803]: I0114 01:43:30.533466 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lctvc\" (UniqueName: \"kubernetes.io/projected/f7be471a-cdad-47a9-a9b9-30003e7852fb-kube-api-access-lctvc\") pod \"f7be471a-cdad-47a9-a9b9-30003e7852fb\" (UID: \"f7be471a-cdad-47a9-a9b9-30003e7852fb\") " Jan 14 01:43:30.534075 kubelet[2803]: I0114 01:43:30.533496 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7be471a-cdad-47a9-a9b9-30003e7852fb-whisker-backend-key-pair\") pod 
\"f7be471a-cdad-47a9-a9b9-30003e7852fb\" (UID: \"f7be471a-cdad-47a9-a9b9-30003e7852fb\") " Jan 14 01:43:30.534528 kubelet[2803]: I0114 01:43:30.534504 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7be471a-cdad-47a9-a9b9-30003e7852fb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f7be471a-cdad-47a9-a9b9-30003e7852fb" (UID: "f7be471a-cdad-47a9-a9b9-30003e7852fb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:43:30.538109 kubelet[2803]: I0114 01:43:30.538083 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7be471a-cdad-47a9-a9b9-30003e7852fb-kube-api-access-lctvc" (OuterVolumeSpecName: "kube-api-access-lctvc") pod "f7be471a-cdad-47a9-a9b9-30003e7852fb" (UID: "f7be471a-cdad-47a9-a9b9-30003e7852fb"). InnerVolumeSpecName "kube-api-access-lctvc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:43:30.538498 kubelet[2803]: I0114 01:43:30.538480 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7be471a-cdad-47a9-a9b9-30003e7852fb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f7be471a-cdad-47a9-a9b9-30003e7852fb" (UID: "f7be471a-cdad-47a9-a9b9-30003e7852fb"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:43:30.634073 kubelet[2803]: I0114 01:43:30.634038 2803 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7be471a-cdad-47a9-a9b9-30003e7852fb-whisker-ca-bundle\") on node \"172-239-193-229\" DevicePath \"\"" Jan 14 01:43:30.634073 kubelet[2803]: I0114 01:43:30.634068 2803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lctvc\" (UniqueName: \"kubernetes.io/projected/f7be471a-cdad-47a9-a9b9-30003e7852fb-kube-api-access-lctvc\") on node \"172-239-193-229\" DevicePath \"\"" Jan 14 01:43:30.634073 kubelet[2803]: I0114 01:43:30.634079 2803 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7be471a-cdad-47a9-a9b9-30003e7852fb-whisker-backend-key-pair\") on node \"172-239-193-229\" DevicePath \"\"" Jan 14 01:43:30.783243 systemd[1]: Removed slice kubepods-besteffort-podf7be471a_cdad_47a9_a9b9_30003e7852fb.slice - libcontainer container kubepods-besteffort-podf7be471a_cdad_47a9_a9b9_30003e7852fb.slice. Jan 14 01:43:30.836407 systemd[1]: Created slice kubepods-besteffort-pod587711a7_ed5a_468c_b6b8_7056f146431a.slice - libcontainer container kubepods-besteffort-pod587711a7_ed5a_468c_b6b8_7056f146431a.slice. 
Jan 14 01:43:30.936272 kubelet[2803]: I0114 01:43:30.936224 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/587711a7-ed5a-468c-b6b8-7056f146431a-whisker-ca-bundle\") pod \"whisker-79c4f8b6b9-9knmv\" (UID: \"587711a7-ed5a-468c-b6b8-7056f146431a\") " pod="calico-system/whisker-79c4f8b6b9-9knmv" Jan 14 01:43:30.936272 kubelet[2803]: I0114 01:43:30.936275 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4dfp\" (UniqueName: \"kubernetes.io/projected/587711a7-ed5a-468c-b6b8-7056f146431a-kube-api-access-j4dfp\") pod \"whisker-79c4f8b6b9-9knmv\" (UID: \"587711a7-ed5a-468c-b6b8-7056f146431a\") " pod="calico-system/whisker-79c4f8b6b9-9knmv" Jan 14 01:43:30.936497 kubelet[2803]: I0114 01:43:30.936297 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/587711a7-ed5a-468c-b6b8-7056f146431a-whisker-backend-key-pair\") pod \"whisker-79c4f8b6b9-9knmv\" (UID: \"587711a7-ed5a-468c-b6b8-7056f146431a\") " pod="calico-system/whisker-79c4f8b6b9-9knmv" Jan 14 01:43:31.007493 systemd[1]: var-lib-kubelet-pods-f7be471a\x2dcdad\x2d47a9\x2da9b9\x2d30003e7852fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlctvc.mount: Deactivated successfully. Jan 14 01:43:31.007917 systemd[1]: var-lib-kubelet-pods-f7be471a\x2dcdad\x2d47a9\x2da9b9\x2d30003e7852fb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 14 01:43:31.141867 containerd[1600]: time="2026-01-14T01:43:31.141748453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c4f8b6b9-9knmv,Uid:587711a7-ed5a-468c-b6b8-7056f146431a,Namespace:calico-system,Attempt:0,}" Jan 14 01:43:31.275948 systemd-networkd[1502]: calic3e0813c412: Link UP Jan 14 01:43:31.276773 systemd-networkd[1502]: calic3e0813c412: Gained carrier Jan 14 01:43:31.293297 containerd[1600]: 2026-01-14 01:43:31.167 [INFO][3842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:43:31.293297 containerd[1600]: 2026-01-14 01:43:31.202 [INFO][3842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0 whisker-79c4f8b6b9- calico-system 587711a7-ed5a-468c-b6b8-7056f146431a 898 0 2026-01-14 01:43:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79c4f8b6b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-193-229 whisker-79c4f8b6b9-9knmv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic3e0813c412 [] [] }} ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Namespace="calico-system" Pod="whisker-79c4f8b6b9-9knmv" WorkloadEndpoint="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-" Jan 14 01:43:31.293297 containerd[1600]: 2026-01-14 01:43:31.202 [INFO][3842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Namespace="calico-system" Pod="whisker-79c4f8b6b9-9knmv" WorkloadEndpoint="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0" Jan 14 01:43:31.293297 containerd[1600]: 2026-01-14 01:43:31.226 [INFO][3854] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" HandleID="k8s-pod-network.3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Workload="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0" Jan 14 01:43:31.293734 containerd[1600]: 2026-01-14 01:43:31.226 [INFO][3854] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" HandleID="k8s-pod-network.3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Workload="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-229", "pod":"whisker-79c4f8b6b9-9knmv", "timestamp":"2026-01-14 01:43:31.225999511 +0000 UTC"}, Hostname:"172-239-193-229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:43:31.293734 containerd[1600]: 2026-01-14 01:43:31.226 [INFO][3854] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:43:31.293734 containerd[1600]: 2026-01-14 01:43:31.226 [INFO][3854] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:43:31.293734 containerd[1600]: 2026-01-14 01:43:31.226 [INFO][3854] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-229' Jan 14 01:43:31.293734 containerd[1600]: 2026-01-14 01:43:31.237 [INFO][3854] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" host="172-239-193-229" Jan 14 01:43:31.293734 containerd[1600]: 2026-01-14 01:43:31.245 [INFO][3854] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-229" Jan 14 01:43:31.293734 containerd[1600]: 2026-01-14 01:43:31.248 [INFO][3854] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:31.293734 containerd[1600]: 2026-01-14 01:43:31.249 [INFO][3854] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:31.293734 containerd[1600]: 2026-01-14 01:43:31.251 [INFO][3854] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:31.294045 containerd[1600]: 2026-01-14 01:43:31.251 [INFO][3854] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" host="172-239-193-229" Jan 14 01:43:31.294045 containerd[1600]: 2026-01-14 01:43:31.252 [INFO][3854] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb Jan 14 01:43:31.294045 containerd[1600]: 2026-01-14 01:43:31.256 [INFO][3854] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" host="172-239-193-229" Jan 14 01:43:31.294045 containerd[1600]: 2026-01-14 01:43:31.260 [INFO][3854] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.193/26] block=192.168.68.192/26 
handle="k8s-pod-network.3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" host="172-239-193-229" Jan 14 01:43:31.294045 containerd[1600]: 2026-01-14 01:43:31.260 [INFO][3854] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.193/26] handle="k8s-pod-network.3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" host="172-239-193-229" Jan 14 01:43:31.294045 containerd[1600]: 2026-01-14 01:43:31.260 [INFO][3854] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:43:31.294045 containerd[1600]: 2026-01-14 01:43:31.260 [INFO][3854] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.193/26] IPv6=[] ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" HandleID="k8s-pod-network.3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Workload="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0" Jan 14 01:43:31.294323 containerd[1600]: 2026-01-14 01:43:31.264 [INFO][3842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Namespace="calico-system" Pod="whisker-79c4f8b6b9-9knmv" WorkloadEndpoint="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0", GenerateName:"whisker-79c4f8b6b9-", Namespace:"calico-system", SelfLink:"", UID:"587711a7-ed5a-468c-b6b8-7056f146431a", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79c4f8b6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"", Pod:"whisker-79c4f8b6b9-9knmv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.68.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic3e0813c412", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:31.294323 containerd[1600]: 2026-01-14 01:43:31.264 [INFO][3842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.193/32] ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Namespace="calico-system" Pod="whisker-79c4f8b6b9-9knmv" WorkloadEndpoint="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0" Jan 14 01:43:31.294445 containerd[1600]: 2026-01-14 01:43:31.264 [INFO][3842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3e0813c412 ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Namespace="calico-system" Pod="whisker-79c4f8b6b9-9knmv" WorkloadEndpoint="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0" Jan 14 01:43:31.294445 containerd[1600]: 2026-01-14 01:43:31.277 [INFO][3842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Namespace="calico-system" Pod="whisker-79c4f8b6b9-9knmv" WorkloadEndpoint="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0" Jan 14 01:43:31.294518 containerd[1600]: 2026-01-14 01:43:31.278 [INFO][3842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Namespace="calico-system" 
Pod="whisker-79c4f8b6b9-9knmv" WorkloadEndpoint="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0", GenerateName:"whisker-79c4f8b6b9-", Namespace:"calico-system", SelfLink:"", UID:"587711a7-ed5a-468c-b6b8-7056f146431a", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79c4f8b6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb", Pod:"whisker-79c4f8b6b9-9knmv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.68.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic3e0813c412", MAC:"3a:b7:d9:7f:9b:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:31.294647 containerd[1600]: 2026-01-14 01:43:31.291 [INFO][3842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" Namespace="calico-system" Pod="whisker-79c4f8b6b9-9knmv" WorkloadEndpoint="172--239--193--229-k8s-whisker--79c4f8b6b9--9knmv-eth0" Jan 14 01:43:31.325077 kubelet[2803]: I0114 01:43:31.324965 
2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7be471a-cdad-47a9-a9b9-30003e7852fb" path="/var/lib/kubelet/pods/f7be471a-cdad-47a9-a9b9-30003e7852fb/volumes" Jan 14 01:43:31.333896 containerd[1600]: time="2026-01-14T01:43:31.333862867Z" level=info msg="connecting to shim 3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb" address="unix:///run/containerd/s/26f1b019c69ca6023d75096df09d2c0f461c87fc40cf9e9f20a7a71c22111707" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:31.362606 systemd[1]: Started cri-containerd-3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb.scope - libcontainer container 3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb. Jan 14 01:43:31.383498 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 14 01:43:31.383569 kernel: audit: type=1334 audit(1768355011.381:597): prog-id=187 op=LOAD Jan 14 01:43:31.381000 audit: BPF prog-id=187 op=LOAD Jan 14 01:43:31.387331 kernel: audit: type=1334 audit(1768355011.381:598): prog-id=188 op=LOAD Jan 14 01:43:31.381000 audit: BPF prog-id=188 op=LOAD Jan 14 01:43:31.381000 audit[3885]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:31.402351 kernel: audit: type=1300 audit(1768355011.381:598): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:31.402477 kernel: audit: type=1327 audit(1768355011.381:598): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335343463623832666631666461323265396437623232333564633263 Jan 14 01:43:31.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335343463623832666631666461323265396437623232333564633263 Jan 14 01:43:31.381000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:43:31.411797 kernel: audit: type=1334 audit(1768355011.381:599): prog-id=188 op=UNLOAD Jan 14 01:43:31.411838 kernel: audit: type=1300 audit(1768355011.381:599): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:31.381000 audit[3885]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:31.415453 kernel: audit: type=1327 audit(1768355011.381:599): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335343463623832666631666461323265396437623232333564633263 Jan 14 01:43:31.381000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335343463623832666631666461323265396437623232333564633263 Jan 14 01:43:31.381000 audit: BPF prog-id=189 op=LOAD Jan 14 01:43:31.429549 kernel: audit: type=1334 audit(1768355011.381:600): prog-id=189 op=LOAD Jan 14 01:43:31.429613 kernel: audit: type=1300 audit(1768355011.381:600): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:31.381000 audit[3885]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:31.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335343463623832666631666461323265396437623232333564633263 Jan 14 01:43:31.442062 containerd[1600]: time="2026-01-14T01:43:31.434532207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c4f8b6b9-9knmv,Uid:587711a7-ed5a-468c-b6b8-7056f146431a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3544cb82ff1fda22e9d7b2235dc2c72cf0e60301e0aa46c6da0b1afdc6ee5acb\"" Jan 14 01:43:31.442062 containerd[1600]: time="2026-01-14T01:43:31.435771696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:43:31.442465 kernel: audit: type=1327 audit(1768355011.381:600): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335343463623832666631666461323265396437623232333564633263 Jan 14 01:43:31.381000 audit: BPF prog-id=190 op=LOAD Jan 14 01:43:31.381000 audit[3885]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:31.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335343463623832666631666461323265396437623232333564633263 Jan 14 01:43:31.381000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:43:31.381000 audit[3885]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:31.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335343463623832666631666461323265396437623232333564633263 Jan 14 01:43:31.381000 audit: BPF prog-id=189 op=UNLOAD Jan 14 01:43:31.381000 audit[3885]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:43:31.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335343463623832666631666461323265396437623232333564633263 Jan 14 01:43:31.381000 audit: BPF prog-id=191 op=LOAD Jan 14 01:43:31.381000 audit[3885]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3874 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:31.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335343463623832666631666461323265396437623232333564633263 Jan 14 01:43:31.480480 kubelet[2803]: I0114 01:43:31.480459 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:43:31.480912 kubelet[2803]: E0114 01:43:31.480825 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:31.576086 containerd[1600]: time="2026-01-14T01:43:31.576012956Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:31.577207 containerd[1600]: time="2026-01-14T01:43:31.577041496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:43:31.577207 containerd[1600]: time="2026-01-14T01:43:31.577085915Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:31.577391 kubelet[2803]: E0114 01:43:31.577310 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:43:31.577391 kubelet[2803]: E0114 01:43:31.577367 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:43:31.579711 kubelet[2803]: E0114 01:43:31.579649 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c32153cf5ee94e1085ad7bf9a7fbf30a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4dfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalatio
n:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c4f8b6b9-9knmv_calico-system(587711a7-ed5a-468c-b6b8-7056f146431a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:31.582829 containerd[1600]: time="2026-01-14T01:43:31.582769513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:43:31.708737 containerd[1600]: time="2026-01-14T01:43:31.708677310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:31.709381 containerd[1600]: time="2026-01-14T01:43:31.709338229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:43:31.709537 containerd[1600]: time="2026-01-14T01:43:31.709427659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:31.709643 kubelet[2803]: E0114 01:43:31.709608 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:43:31.709869 kubelet[2803]: E0114 01:43:31.709656 2803 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:43:31.709906 kubelet[2803]: E0114 01:43:31.709771 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4dfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,Localho
stProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c4f8b6b9-9knmv_calico-system(587711a7-ed5a-468c-b6b8-7056f146431a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:31.711279 kubelet[2803]: E0114 01:43:31.711231 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:43:32.484487 kubelet[2803]: E0114 01:43:32.484399 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:43:32.507000 audit[4005]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4005 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:32.507000 audit[4005]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe55ad04f0 a2=0 a3=7ffe55ad04dc items=0 ppid=2916 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:32.507000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:32.514000 audit[4005]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4005 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:32.514000 audit[4005]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe55ad04f0 a2=0 a3=0 items=0 ppid=2916 pid=4005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:32.514000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:33.296659 systemd-networkd[1502]: calic3e0813c412: Gained IPv6LL Jan 14 01:43:33.673377 kubelet[2803]: I0114 01:43:33.673324 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:43:33.674031 kubelet[2803]: E0114 01:43:33.673815 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:37.321854 kubelet[2803]: E0114 01:43:37.321822 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:37.325813 containerd[1600]: time="2026-01-14T01:43:37.324805441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n7n5w,Uid:d646678c-86d1-495a-97d3-cd193380cb78,Namespace:kube-system,Attempt:0,}" Jan 14 01:43:37.325813 containerd[1600]: time="2026-01-14T01:43:37.324838921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gg5g8,Uid:27494ae0-0ad7-4d62-b447-69c7f55fa588,Namespace:calico-system,Attempt:0,}" Jan 14 01:43:37.498903 systemd-networkd[1502]: cali0214217e6b6: Link UP Jan 14 01:43:37.501007 systemd-networkd[1502]: cali0214217e6b6: Gained carrier Jan 14 01:43:37.527002 containerd[1600]: 2026-01-14 01:43:37.381 [INFO][4152] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:43:37.527002 containerd[1600]: 2026-01-14 01:43:37.396 [INFO][4152] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0 coredns-674b8bbfcf- kube-system d646678c-86d1-495a-97d3-cd193380cb78 820 0 2026-01-14 01:43:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-193-229 coredns-674b8bbfcf-n7n5w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0214217e6b6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7n5w" 
WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-" Jan 14 01:43:37.527002 containerd[1600]: 2026-01-14 01:43:37.396 [INFO][4152] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7n5w" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0" Jan 14 01:43:37.527002 containerd[1600]: 2026-01-14 01:43:37.437 [INFO][4180] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" HandleID="k8s-pod-network.a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Workload="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0" Jan 14 01:43:37.527213 containerd[1600]: 2026-01-14 01:43:37.437 [INFO][4180] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" HandleID="k8s-pod-network.a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Workload="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4f80), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-193-229", "pod":"coredns-674b8bbfcf-n7n5w", "timestamp":"2026-01-14 01:43:37.437707854 +0000 UTC"}, Hostname:"172-239-193-229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:43:37.527213 containerd[1600]: 2026-01-14 01:43:37.438 [INFO][4180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:43:37.527213 containerd[1600]: 2026-01-14 01:43:37.438 [INFO][4180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:43:37.527213 containerd[1600]: 2026-01-14 01:43:37.438 [INFO][4180] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-229' Jan 14 01:43:37.527213 containerd[1600]: 2026-01-14 01:43:37.446 [INFO][4180] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" host="172-239-193-229" Jan 14 01:43:37.527213 containerd[1600]: 2026-01-14 01:43:37.456 [INFO][4180] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-229" Jan 14 01:43:37.527213 containerd[1600]: 2026-01-14 01:43:37.463 [INFO][4180] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:37.527213 containerd[1600]: 2026-01-14 01:43:37.465 [INFO][4180] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:37.527213 containerd[1600]: 2026-01-14 01:43:37.468 [INFO][4180] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:37.527213 containerd[1600]: 2026-01-14 01:43:37.468 [INFO][4180] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" host="172-239-193-229" Jan 14 01:43:37.528592 containerd[1600]: 2026-01-14 01:43:37.470 [INFO][4180] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e Jan 14 01:43:37.528592 containerd[1600]: 2026-01-14 01:43:37.476 [INFO][4180] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" host="172-239-193-229" Jan 14 01:43:37.528592 containerd[1600]: 2026-01-14 01:43:37.481 [INFO][4180] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.194/26] block=192.168.68.192/26 
handle="k8s-pod-network.a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" host="172-239-193-229" Jan 14 01:43:37.528592 containerd[1600]: 2026-01-14 01:43:37.482 [INFO][4180] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.194/26] handle="k8s-pod-network.a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" host="172-239-193-229" Jan 14 01:43:37.528592 containerd[1600]: 2026-01-14 01:43:37.483 [INFO][4180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:43:37.528592 containerd[1600]: 2026-01-14 01:43:37.483 [INFO][4180] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.194/26] IPv6=[] ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" HandleID="k8s-pod-network.a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Workload="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0" Jan 14 01:43:37.528878 containerd[1600]: 2026-01-14 01:43:37.485 [INFO][4152] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7n5w" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d646678c-86d1-495a-97d3-cd193380cb78", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"", Pod:"coredns-674b8bbfcf-n7n5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0214217e6b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:37.528878 containerd[1600]: 2026-01-14 01:43:37.485 [INFO][4152] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.194/32] ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7n5w" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0" Jan 14 01:43:37.528878 containerd[1600]: 2026-01-14 01:43:37.485 [INFO][4152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0214217e6b6 ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7n5w" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0" Jan 14 01:43:37.528878 containerd[1600]: 2026-01-14 01:43:37.504 [INFO][4152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-n7n5w" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0" Jan 14 01:43:37.528878 containerd[1600]: 2026-01-14 01:43:37.505 [INFO][4152] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7n5w" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d646678c-86d1-495a-97d3-cd193380cb78", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e", Pod:"coredns-674b8bbfcf-n7n5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0214217e6b6", MAC:"ca:a6:ac:55:c9:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:37.528878 containerd[1600]: 2026-01-14 01:43:37.521 [INFO][4152] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7n5w" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--n7n5w-eth0" Jan 14 01:43:37.548367 containerd[1600]: time="2026-01-14T01:43:37.548270679Z" level=info msg="connecting to shim a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e" address="unix:///run/containerd/s/e7fd64dd9c28e8368dcbdafa74e944fc9a5052b78b1b63dce8e0c769762d0369" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:37.596867 systemd[1]: Started cri-containerd-a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e.scope - libcontainer container a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e. 
Jan 14 01:43:37.602559 systemd-networkd[1502]: cali6cb99e5899c: Link UP Jan 14 01:43:37.602917 systemd-networkd[1502]: cali6cb99e5899c: Gained carrier Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.395 [INFO][4156] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.417 [INFO][4156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--229-k8s-csi--node--driver--gg5g8-eth0 csi-node-driver- calico-system 27494ae0-0ad7-4d62-b447-69c7f55fa588 726 0 2026-01-14 01:43:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-193-229 csi-node-driver-gg5g8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6cb99e5899c [] [] }} ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Namespace="calico-system" Pod="csi-node-driver-gg5g8" WorkloadEndpoint="172--239--193--229-k8s-csi--node--driver--gg5g8-" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.417 [INFO][4156] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Namespace="calico-system" Pod="csi-node-driver-gg5g8" WorkloadEndpoint="172--239--193--229-k8s-csi--node--driver--gg5g8-eth0" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.471 [INFO][4186] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" HandleID="k8s-pod-network.0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Workload="172--239--193--229-k8s-csi--node--driver--gg5g8-eth0" 
Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.471 [INFO][4186] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" HandleID="k8s-pod-network.0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Workload="172--239--193--229-k8s-csi--node--driver--gg5g8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-229", "pod":"csi-node-driver-gg5g8", "timestamp":"2026-01-14 01:43:37.471673617 +0000 UTC"}, Hostname:"172-239-193-229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.472 [INFO][4186] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.483 [INFO][4186] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.483 [INFO][4186] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-229' Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.547 [INFO][4186] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" host="172-239-193-229" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.556 [INFO][4186] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-229" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.568 [INFO][4186] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.571 [INFO][4186] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.575 [INFO][4186] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.575 [INFO][4186] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" host="172-239-193-229" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.578 [INFO][4186] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3 Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.584 [INFO][4186] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" host="172-239-193-229" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.589 [INFO][4186] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.195/26] block=192.168.68.192/26 
handle="k8s-pod-network.0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" host="172-239-193-229" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.589 [INFO][4186] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.195/26] handle="k8s-pod-network.0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" host="172-239-193-229" Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.589 [INFO][4186] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:43:37.621613 containerd[1600]: 2026-01-14 01:43:37.589 [INFO][4186] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.195/26] IPv6=[] ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" HandleID="k8s-pod-network.0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Workload="172--239--193--229-k8s-csi--node--driver--gg5g8-eth0" Jan 14 01:43:37.623274 containerd[1600]: 2026-01-14 01:43:37.595 [INFO][4156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Namespace="calico-system" Pod="csi-node-driver-gg5g8" WorkloadEndpoint="172--239--193--229-k8s-csi--node--driver--gg5g8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-csi--node--driver--gg5g8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27494ae0-0ad7-4d62-b447-69c7f55fa588", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"", Pod:"csi-node-driver-gg5g8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.68.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6cb99e5899c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:37.623274 containerd[1600]: 2026-01-14 01:43:37.595 [INFO][4156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.195/32] ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Namespace="calico-system" Pod="csi-node-driver-gg5g8" WorkloadEndpoint="172--239--193--229-k8s-csi--node--driver--gg5g8-eth0" Jan 14 01:43:37.623274 containerd[1600]: 2026-01-14 01:43:37.595 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cb99e5899c ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Namespace="calico-system" Pod="csi-node-driver-gg5g8" WorkloadEndpoint="172--239--193--229-k8s-csi--node--driver--gg5g8-eth0" Jan 14 01:43:37.623274 containerd[1600]: 2026-01-14 01:43:37.604 [INFO][4156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Namespace="calico-system" Pod="csi-node-driver-gg5g8" WorkloadEndpoint="172--239--193--229-k8s-csi--node--driver--gg5g8-eth0" Jan 14 01:43:37.623274 containerd[1600]: 2026-01-14 01:43:37.604 [INFO][4156] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Namespace="calico-system" Pod="csi-node-driver-gg5g8" WorkloadEndpoint="172--239--193--229-k8s-csi--node--driver--gg5g8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-csi--node--driver--gg5g8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27494ae0-0ad7-4d62-b447-69c7f55fa588", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3", Pod:"csi-node-driver-gg5g8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.68.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6cb99e5899c", MAC:"2a:5e:06:49:41:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:37.623274 containerd[1600]: 2026-01-14 01:43:37.617 [INFO][4156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" Namespace="calico-system" Pod="csi-node-driver-gg5g8" WorkloadEndpoint="172--239--193--229-k8s-csi--node--driver--gg5g8-eth0" Jan 14 01:43:37.630489 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 14 01:43:37.630603 kernel: audit: type=1334 audit(1768355017.626:607): prog-id=192 op=LOAD Jan 14 01:43:37.626000 audit: BPF prog-id=192 op=LOAD Jan 14 01:43:37.627000 audit: BPF prog-id=193 op=LOAD Jan 14 01:43:37.633906 kernel: audit: type=1334 audit(1768355017.627:608): prog-id=193 op=LOAD Jan 14 01:43:37.636021 kernel: audit: type=1300 audit(1768355017.627:608): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.627000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656132313263376566653131663738663334383962386363646161 Jan 14 01:43:37.627000 audit: BPF prog-id=193 op=UNLOAD Jan 14 01:43:37.652488 kernel: audit: type=1327 audit(1768355017.627:608): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656132313263376566653131663738663334383962386363646161 Jan 14 01:43:37.652533 kernel: audit: 
type=1334 audit(1768355017.627:609): prog-id=193 op=UNLOAD Jan 14 01:43:37.659503 kernel: audit: type=1300 audit(1768355017.627:609): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.627000 audit[4226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.670767 kernel: audit: type=1327 audit(1768355017.627:609): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656132313263376566653131663738663334383962386363646161 Jan 14 01:43:37.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656132313263376566653131663738663334383962386363646161 Jan 14 01:43:37.627000 audit: BPF prog-id=194 op=LOAD Jan 14 01:43:37.685900 kernel: audit: type=1334 audit(1768355017.627:610): prog-id=194 op=LOAD Jan 14 01:43:37.685933 kernel: audit: type=1300 audit(1768355017.627:610): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.627000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 
ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656132313263376566653131663738663334383962386363646161 Jan 14 01:43:37.690564 containerd[1600]: time="2026-01-14T01:43:37.687976769Z" level=info msg="connecting to shim 0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3" address="unix:///run/containerd/s/dbabc92a8f23b5137bbd0d65545b1f4d2b26c94b6cdfbb006777765a4c34ce01" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:37.627000 audit: BPF prog-id=195 op=LOAD Jan 14 01:43:37.627000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.695444 kernel: audit: type=1327 audit(1768355017.627:610): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656132313263376566653131663738663334383962386363646161 Jan 14 01:43:37.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656132313263376566653131663738663334383962386363646161 Jan 14 01:43:37.627000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:43:37.627000 audit[4226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 
a1=0 a2=0 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656132313263376566653131663738663334383962386363646161 Jan 14 01:43:37.627000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:43:37.627000 audit[4226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656132313263376566653131663738663334383962386363646161 Jan 14 01:43:37.627000 audit: BPF prog-id=196 op=LOAD Jan 14 01:43:37.627000 audit[4226]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4215 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132656132313263376566653131663738663334383962386363646161 Jan 14 01:43:37.702478 containerd[1600]: time="2026-01-14T01:43:37.702324762Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-n7n5w,Uid:d646678c-86d1-495a-97d3-cd193380cb78,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e\"" Jan 14 01:43:37.703521 kubelet[2803]: E0114 01:43:37.703500 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:37.708209 containerd[1600]: time="2026-01-14T01:43:37.708024929Z" level=info msg="CreateContainer within sandbox \"a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:43:37.721810 containerd[1600]: time="2026-01-14T01:43:37.721789852Z" level=info msg="Container 96f662147998dc326b2178dbfdf88ffa6aa15a28340f15da2efdd4ba70452281: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:43:37.724585 systemd[1]: Started cri-containerd-0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3.scope - libcontainer container 0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3. 
Jan 14 01:43:37.727660 containerd[1600]: time="2026-01-14T01:43:37.726994600Z" level=info msg="CreateContainer within sandbox \"a2ea212c7efe11f78f3489b8ccdaabf3efad1b99e5b2cc1dbfd5ea705d6e5a4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96f662147998dc326b2178dbfdf88ffa6aa15a28340f15da2efdd4ba70452281\"" Jan 14 01:43:37.729636 containerd[1600]: time="2026-01-14T01:43:37.729617988Z" level=info msg="StartContainer for \"96f662147998dc326b2178dbfdf88ffa6aa15a28340f15da2efdd4ba70452281\"" Jan 14 01:43:37.733309 containerd[1600]: time="2026-01-14T01:43:37.732468617Z" level=info msg="connecting to shim 96f662147998dc326b2178dbfdf88ffa6aa15a28340f15da2efdd4ba70452281" address="unix:///run/containerd/s/e7fd64dd9c28e8368dcbdafa74e944fc9a5052b78b1b63dce8e0c769762d0369" protocol=ttrpc version=3 Jan 14 01:43:37.745000 audit: BPF prog-id=197 op=LOAD Jan 14 01:43:37.745000 audit: BPF prog-id=198 op=LOAD Jan 14 01:43:37.745000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4259 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663932353866653338633439666438353439616264363434363165 Jan 14 01:43:37.749000 audit: BPF prog-id=198 op=UNLOAD Jan 14 01:43:37.749000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4259 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.749000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663932353866653338633439666438353439616264363434363165 Jan 14 01:43:37.750000 audit: BPF prog-id=199 op=LOAD Jan 14 01:43:37.750000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4259 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663932353866653338633439666438353439616264363434363165 Jan 14 01:43:37.750000 audit: BPF prog-id=200 op=LOAD Jan 14 01:43:37.750000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4259 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663932353866653338633439666438353439616264363434363165 Jan 14 01:43:37.750000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:43:37.750000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4259 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:43:37.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663932353866653338633439666438353439616264363434363165 Jan 14 01:43:37.751000 audit: BPF prog-id=199 op=UNLOAD Jan 14 01:43:37.751000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4259 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663932353866653338633439666438353439616264363434363165 Jan 14 01:43:37.751000 audit: BPF prog-id=201 op=LOAD Jan 14 01:43:37.751000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4259 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663932353866653338633439666438353439616264363434363165 Jan 14 01:43:37.757625 systemd[1]: Started cri-containerd-96f662147998dc326b2178dbfdf88ffa6aa15a28340f15da2efdd4ba70452281.scope - libcontainer container 96f662147998dc326b2178dbfdf88ffa6aa15a28340f15da2efdd4ba70452281. 
Jan 14 01:43:37.774170 containerd[1600]: time="2026-01-14T01:43:37.774142126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gg5g8,Uid:27494ae0-0ad7-4d62-b447-69c7f55fa588,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cf9258fe38c49fd8549abd64461e81acae4dbd62b2f7f56e3ea2f63f0a4bba3\"" Jan 14 01:43:37.776943 containerd[1600]: time="2026-01-14T01:43:37.776919605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:43:37.784000 audit: BPF prog-id=202 op=LOAD Jan 14 01:43:37.784000 audit: BPF prog-id=203 op=LOAD Jan 14 01:43:37.784000 audit[4296]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4215 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936663636323134373939386463333236623231373864626664663838 Jan 14 01:43:37.784000 audit: BPF prog-id=203 op=UNLOAD Jan 14 01:43:37.784000 audit[4296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4215 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936663636323134373939386463333236623231373864626664663838 Jan 14 01:43:37.784000 audit: BPF prog-id=204 op=LOAD Jan 14 01:43:37.784000 audit[4296]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4215 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936663636323134373939386463333236623231373864626664663838 Jan 14 01:43:37.785000 audit: BPF prog-id=205 op=LOAD Jan 14 01:43:37.785000 audit[4296]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4215 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936663636323134373939386463333236623231373864626664663838 Jan 14 01:43:37.785000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:43:37.785000 audit[4296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4215 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936663636323134373939386463333236623231373864626664663838 Jan 14 01:43:37.785000 audit: BPF prog-id=204 op=UNLOAD Jan 14 
01:43:37.785000 audit[4296]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4215 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936663636323134373939386463333236623231373864626664663838 Jan 14 01:43:37.785000 audit: BPF prog-id=206 op=LOAD Jan 14 01:43:37.785000 audit[4296]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4215 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:37.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936663636323134373939386463333236623231373864626664663838 Jan 14 01:43:37.805403 containerd[1600]: time="2026-01-14T01:43:37.805326561Z" level=info msg="StartContainer for \"96f662147998dc326b2178dbfdf88ffa6aa15a28340f15da2efdd4ba70452281\" returns successfully" Jan 14 01:43:37.907614 containerd[1600]: time="2026-01-14T01:43:37.906047040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:37.907614 containerd[1600]: time="2026-01-14T01:43:37.907189220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 
01:43:37.907614 containerd[1600]: time="2026-01-14T01:43:37.907262060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:37.908025 kubelet[2803]: E0114 01:43:37.907982 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:43:37.908196 kubelet[2803]: E0114 01:43:37.908031 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:43:37.909757 kubelet[2803]: E0114 01:43:37.909711 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5hz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gg5g8_calico-system(27494ae0-0ad7-4d62-b447-69c7f55fa588): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:43:37.912201 containerd[1600]: time="2026-01-14T01:43:37.912180657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:43:38.037437 containerd[1600]: time="2026-01-14T01:43:38.037372375Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:38.038437 containerd[1600]: time="2026-01-14T01:43:38.038381164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:43:38.038598 containerd[1600]: time="2026-01-14T01:43:38.038479544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:38.038757 kubelet[2803]: E0114 01:43:38.038703 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:43:38.038757 kubelet[2803]: E0114 01:43:38.038754 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:43:38.038911 kubelet[2803]: E0114 01:43:38.038863 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5hz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gg5g8_calico-system(27494ae0-0ad7-4d62-b447-69c7f55fa588): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:38.040158 kubelet[2803]: E0114 01:43:38.040107 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:43:38.320565 containerd[1600]: time="2026-01-14T01:43:38.320308033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b466d74c-r9454,Uid:467c90a2-bf12-4a6d-a6a3-0bb4155d4e42,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:43:38.320838 containerd[1600]: time="2026-01-14T01:43:38.320817853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-l58pb,Uid:79093d5d-07cf-4a25-a816-7eeb844e241f,Namespace:calico-system,Attempt:0,}" Jan 14 01:43:38.454576 systemd-networkd[1502]: cali8f898ba669b: Link UP Jan 14 01:43:38.456624 systemd-networkd[1502]: cali8f898ba669b: Gained carrier Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.374 [INFO][4341] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.387 [INFO][4341] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0 calico-apiserver-8b466d74c- calico-apiserver 467c90a2-bf12-4a6d-a6a3-0bb4155d4e42 823 0 2026-01-14 01:43:12 
+0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8b466d74c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-193-229 calico-apiserver-8b466d74c-r9454 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8f898ba669b [] [] }} ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-r9454" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.388 [INFO][4341] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-r9454" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.417 [INFO][4365] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" HandleID="k8s-pod-network.fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Workload="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.417 [INFO][4365] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" HandleID="k8s-pod-network.fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Workload="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-193-229", "pod":"calico-apiserver-8b466d74c-r9454", 
"timestamp":"2026-01-14 01:43:38.417732644 +0000 UTC"}, Hostname:"172-239-193-229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.417 [INFO][4365] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.417 [INFO][4365] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.417 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-229' Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.425 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" host="172-239-193-229" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.429 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-229" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.432 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.434 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.435 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.435 [INFO][4365] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" host="172-239-193-229" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.436 [INFO][4365] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.440 [INFO][4365] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" host="172-239-193-229" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.444 [INFO][4365] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.196/26] block=192.168.68.192/26 handle="k8s-pod-network.fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" host="172-239-193-229" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.444 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.196/26] handle="k8s-pod-network.fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" host="172-239-193-229" Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.444 [INFO][4365] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:43:38.469463 containerd[1600]: 2026-01-14 01:43:38.444 [INFO][4365] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.196/26] IPv6=[] ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" HandleID="k8s-pod-network.fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Workload="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0" Jan 14 01:43:38.472070 containerd[1600]: 2026-01-14 01:43:38.448 [INFO][4341] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-r9454" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0", GenerateName:"calico-apiserver-8b466d74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"467c90a2-bf12-4a6d-a6a3-0bb4155d4e42", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b466d74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"", Pod:"calico-apiserver-8b466d74c-r9454", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.196/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f898ba669b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:38.472070 containerd[1600]: 2026-01-14 01:43:38.448 [INFO][4341] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.196/32] ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-r9454" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0" Jan 14 01:43:38.472070 containerd[1600]: 2026-01-14 01:43:38.448 [INFO][4341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f898ba669b ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-r9454" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0" Jan 14 01:43:38.472070 containerd[1600]: 2026-01-14 01:43:38.457 [INFO][4341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-r9454" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0" Jan 14 01:43:38.472070 containerd[1600]: 2026-01-14 01:43:38.457 [INFO][4341] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-r9454" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0", GenerateName:"calico-apiserver-8b466d74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"467c90a2-bf12-4a6d-a6a3-0bb4155d4e42", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b466d74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce", Pod:"calico-apiserver-8b466d74c-r9454", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f898ba669b", MAC:"d2:4a:3b:ee:0b:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:38.472070 containerd[1600]: 2026-01-14 01:43:38.464 [INFO][4341] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-r9454" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--r9454-eth0" Jan 14 01:43:38.497830 containerd[1600]: time="2026-01-14T01:43:38.497793514Z" level=info msg="connecting to shim 
fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce" address="unix:///run/containerd/s/e490b5d77fcbe12823ff38740f5ed71b996910598eb5b904d93e66289c95fa6c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:38.506850 kubelet[2803]: E0114 01:43:38.506515 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:38.511182 kubelet[2803]: E0114 01:43:38.511154 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:43:38.552441 kubelet[2803]: I0114 01:43:38.550968 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n7n5w" podStartSLOduration=35.550952748 podStartE2EDuration="35.550952748s" podCreationTimestamp="2026-01-14 01:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:43:38.548597979 +0000 UTC m=+39.341472296" watchObservedRunningTime="2026-01-14 01:43:38.550952748 +0000 UTC m=+39.343827055" Jan 14 
01:43:38.568615 systemd[1]: Started cri-containerd-fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce.scope - libcontainer container fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce. Jan 14 01:43:38.585000 audit[4429]: NETFILTER_CFG table=filter:119 family=2 entries=19 op=nft_register_rule pid=4429 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:38.585000 audit[4429]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc37daef30 a2=0 a3=7ffc37daef1c items=0 ppid=2916 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:38.593000 audit[4429]: NETFILTER_CFG table=nat:120 family=2 entries=33 op=nft_register_chain pid=4429 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:38.593000 audit[4429]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffc37daef30 a2=0 a3=7ffc37daef1c items=0 ppid=2916 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.593000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:38.617637 systemd-networkd[1502]: cali57b31e1bda6: Link UP Jan 14 01:43:38.619894 systemd-networkd[1502]: cali57b31e1bda6: Gained carrier Jan 14 01:43:38.630000 audit: BPF prog-id=207 op=LOAD Jan 14 01:43:38.631000 audit: BPF prog-id=208 op=LOAD Jan 14 01:43:38.631000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4397 pid=4408 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665306161653431616464626262346232313137653832346430643166 Jan 14 01:43:38.631000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:43:38.631000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4397 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665306161653431616464626262346232313137653832346430643166 Jan 14 01:43:38.631000 audit: BPF prog-id=209 op=LOAD Jan 14 01:43:38.631000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4397 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665306161653431616464626262346232313137653832346430643166 Jan 14 01:43:38.631000 audit: BPF prog-id=210 op=LOAD Jan 14 01:43:38.631000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c0001a0218 a2=98 a3=0 items=0 ppid=4397 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665306161653431616464626262346232313137653832346430643166 Jan 14 01:43:38.632000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:43:38.632000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4397 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665306161653431616464626262346232313137653832346430643166 Jan 14 01:43:38.632000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:43:38.632000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4397 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665306161653431616464626262346232313137653832346430643166 Jan 14 01:43:38.632000 audit: BPF prog-id=211 op=LOAD Jan 14 01:43:38.632000 audit[4408]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4397 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665306161653431616464626262346232313137653832346430643166 Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.380 [INFO][4348] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.395 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0 goldmane-666569f655- calico-system 79093d5d-07cf-4a25-a816-7eeb844e241f 830 0 2026-01-14 01:43:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-193-229 goldmane-666569f655-l58pb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali57b31e1bda6 [] [] }} ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Namespace="calico-system" Pod="goldmane-666569f655-l58pb" WorkloadEndpoint="172--239--193--229-k8s-goldmane--666569f655--l58pb-" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.395 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Namespace="calico-system" Pod="goldmane-666569f655-l58pb" WorkloadEndpoint="172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0" Jan 14 01:43:38.646919 
containerd[1600]: 2026-01-14 01:43:38.424 [INFO][4370] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" HandleID="k8s-pod-network.d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Workload="172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.425 [INFO][4370] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" HandleID="k8s-pod-network.d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Workload="172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-229", "pod":"goldmane-666569f655-l58pb", "timestamp":"2026-01-14 01:43:38.424642101 +0000 UTC"}, Hostname:"172-239-193-229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.425 [INFO][4370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.444 [INFO][4370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.445 [INFO][4370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-229' Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.531 [INFO][4370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" host="172-239-193-229" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.554 [INFO][4370] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-229" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.583 [INFO][4370] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.586 [INFO][4370] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.588 [INFO][4370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.589 [INFO][4370] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" host="172-239-193-229" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.591 [INFO][4370] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.597 [INFO][4370] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" host="172-239-193-229" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.603 [INFO][4370] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.197/26] block=192.168.68.192/26 
handle="k8s-pod-network.d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" host="172-239-193-229" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.603 [INFO][4370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.197/26] handle="k8s-pod-network.d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" host="172-239-193-229" Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.603 [INFO][4370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:43:38.646919 containerd[1600]: 2026-01-14 01:43:38.603 [INFO][4370] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.197/26] IPv6=[] ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" HandleID="k8s-pod-network.d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Workload="172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0" Jan 14 01:43:38.647884 containerd[1600]: 2026-01-14 01:43:38.611 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Namespace="calico-system" Pod="goldmane-666569f655-l58pb" WorkloadEndpoint="172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"79093d5d-07cf-4a25-a816-7eeb844e241f", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"", Pod:"goldmane-666569f655-l58pb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.68.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali57b31e1bda6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:38.647884 containerd[1600]: 2026-01-14 01:43:38.611 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.197/32] ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Namespace="calico-system" Pod="goldmane-666569f655-l58pb" WorkloadEndpoint="172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0" Jan 14 01:43:38.647884 containerd[1600]: 2026-01-14 01:43:38.611 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57b31e1bda6 ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Namespace="calico-system" Pod="goldmane-666569f655-l58pb" WorkloadEndpoint="172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0" Jan 14 01:43:38.647884 containerd[1600]: 2026-01-14 01:43:38.625 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Namespace="calico-system" Pod="goldmane-666569f655-l58pb" WorkloadEndpoint="172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0" Jan 14 01:43:38.647884 containerd[1600]: 2026-01-14 01:43:38.627 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Namespace="calico-system" Pod="goldmane-666569f655-l58pb" WorkloadEndpoint="172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"79093d5d-07cf-4a25-a816-7eeb844e241f", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d", Pod:"goldmane-666569f655-l58pb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.68.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali57b31e1bda6", MAC:"e6:68:1f:05:d8:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:38.647884 containerd[1600]: 2026-01-14 01:43:38.642 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" Namespace="calico-system" Pod="goldmane-666569f655-l58pb" 
WorkloadEndpoint="172--239--193--229-k8s-goldmane--666569f655--l58pb-eth0" Jan 14 01:43:38.673534 systemd-networkd[1502]: cali6cb99e5899c: Gained IPv6LL Jan 14 01:43:38.685345 containerd[1600]: time="2026-01-14T01:43:38.685304331Z" level=info msg="connecting to shim d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d" address="unix:///run/containerd/s/44e556d928a2bf73a431acde5305a7605aa1d0728862ca915aa3e03ebd848b78" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:38.732638 systemd[1]: Started cri-containerd-d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d.scope - libcontainer container d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d. Jan 14 01:43:38.738723 containerd[1600]: time="2026-01-14T01:43:38.738667754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b466d74c-r9454,Uid:467c90a2-bf12-4a6d-a6a3-0bb4155d4e42,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fe0aae41addbbb4b2117e824d0d1fd61094e6cfc5b3d1b13856ec89269ca15ce\"" Jan 14 01:43:38.741741 containerd[1600]: time="2026-01-14T01:43:38.741659252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:43:38.755000 audit: BPF prog-id=212 op=LOAD Jan 14 01:43:38.756000 audit: BPF prog-id=213 op=LOAD Jan 14 01:43:38.756000 audit[4461]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4449 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386437346364653135326665633762356535363337666537613964 Jan 14 01:43:38.756000 audit: BPF prog-id=213 op=UNLOAD Jan 14 
01:43:38.756000 audit[4461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4449 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386437346364653135326665633762356535363337666537613964 Jan 14 01:43:38.757000 audit: BPF prog-id=214 op=LOAD Jan 14 01:43:38.757000 audit[4461]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4449 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386437346364653135326665633762356535363337666537613964 Jan 14 01:43:38.757000 audit: BPF prog-id=215 op=LOAD Jan 14 01:43:38.757000 audit[4461]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4449 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386437346364653135326665633762356535363337666537613964 Jan 14 
01:43:38.757000 audit: BPF prog-id=215 op=UNLOAD Jan 14 01:43:38.757000 audit[4461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4449 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386437346364653135326665633762356535363337666537613964 Jan 14 01:43:38.758000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:43:38.758000 audit[4461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4449 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386437346364653135326665633762356535363337666537613964 Jan 14 01:43:38.758000 audit: BPF prog-id=216 op=LOAD Jan 14 01:43:38.758000 audit[4461]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4449 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:38.758000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432386437346364653135326665633762356535363337666537613964 Jan 14 01:43:38.802687 containerd[1600]: time="2026-01-14T01:43:38.802639542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-l58pb,Uid:79093d5d-07cf-4a25-a816-7eeb844e241f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d28d74cde152fec7b5e5637fe7a9d1370442c6d1a737a91a3a276008c5c9939d\"" Jan 14 01:43:38.867335 containerd[1600]: time="2026-01-14T01:43:38.867242370Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:38.869349 containerd[1600]: time="2026-01-14T01:43:38.869310479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:43:38.869444 containerd[1600]: time="2026-01-14T01:43:38.869366878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:38.869529 kubelet[2803]: E0114 01:43:38.869492 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:43:38.869529 kubelet[2803]: E0114 01:43:38.869524 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:43:38.869776 kubelet[2803]: E0114 01:43:38.869740 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vg8lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b466d74c-r9454_calico-apiserver(467c90a2-bf12-4a6d-a6a3-0bb4155d4e42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:38.870381 containerd[1600]: time="2026-01-14T01:43:38.870356368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:43:38.870994 kubelet[2803]: E0114 01:43:38.870953 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:43:38.997616 containerd[1600]: time="2026-01-14T01:43:38.997477494Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:43:38.998304 containerd[1600]: time="2026-01-14T01:43:38.998246384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:43:38.998384 containerd[1600]: time="2026-01-14T01:43:38.998279084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:38.998541 kubelet[2803]: E0114 01:43:38.998490 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:43:38.998590 kubelet[2803]: E0114 01:43:38.998547 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:43:38.998704 kubelet[2803]: E0114 01:43:38.998658 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pglf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-l58pb_calico-system(79093d5d-07cf-4a25-a816-7eeb844e241f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:39.000140 kubelet[2803]: E0114 01:43:38.999989 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:43:39.120734 systemd-networkd[1502]: cali0214217e6b6: Gained IPv6LL Jan 14 01:43:39.322029 kubelet[2803]: E0114 01:43:39.321592 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:39.322805 containerd[1600]: 
time="2026-01-14T01:43:39.322753802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rxwz,Uid:d97b54b4-c39b-4d54-a5a1-73190acb9e98,Namespace:kube-system,Attempt:0,}" Jan 14 01:43:39.431281 systemd-networkd[1502]: calidfe939ad947: Link UP Jan 14 01:43:39.433046 systemd-networkd[1502]: calidfe939ad947: Gained carrier Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.352 [INFO][4506] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.365 [INFO][4506] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0 coredns-674b8bbfcf- kube-system d97b54b4-c39b-4d54-a5a1-73190acb9e98 829 0 2026-01-14 01:43:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-193-229 coredns-674b8bbfcf-4rxwz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidfe939ad947 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxwz" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.365 [INFO][4506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxwz" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.386 [INFO][4518] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" 
HandleID="k8s-pod-network.a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Workload="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.387 [INFO][4518] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" HandleID="k8s-pod-network.a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Workload="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-193-229", "pod":"coredns-674b8bbfcf-4rxwz", "timestamp":"2026-01-14 01:43:39.38697156 +0000 UTC"}, Hostname:"172-239-193-229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.387 [INFO][4518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.387 [INFO][4518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.387 [INFO][4518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-229' Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.399 [INFO][4518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" host="172-239-193-229" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.406 [INFO][4518] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-229" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.410 [INFO][4518] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.411 [INFO][4518] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.413 [INFO][4518] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.414 [INFO][4518] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" host="172-239-193-229" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.415 [INFO][4518] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.420 [INFO][4518] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" host="172-239-193-229" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.425 [INFO][4518] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.198/26] block=192.168.68.192/26 
handle="k8s-pod-network.a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" host="172-239-193-229" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.425 [INFO][4518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.198/26] handle="k8s-pod-network.a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" host="172-239-193-229" Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.426 [INFO][4518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:43:39.445477 containerd[1600]: 2026-01-14 01:43:39.426 [INFO][4518] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.198/26] IPv6=[] ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" HandleID="k8s-pod-network.a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Workload="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0" Jan 14 01:43:39.446025 containerd[1600]: 2026-01-14 01:43:39.428 [INFO][4506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxwz" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d97b54b4-c39b-4d54-a5a1-73190acb9e98", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"", Pod:"coredns-674b8bbfcf-4rxwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidfe939ad947", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:39.446025 containerd[1600]: 2026-01-14 01:43:39.428 [INFO][4506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.198/32] ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxwz" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0" Jan 14 01:43:39.446025 containerd[1600]: 2026-01-14 01:43:39.428 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfe939ad947 ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxwz" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0" Jan 14 01:43:39.446025 containerd[1600]: 2026-01-14 01:43:39.432 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-4rxwz" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0" Jan 14 01:43:39.446025 containerd[1600]: 2026-01-14 01:43:39.433 [INFO][4506] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxwz" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d97b54b4-c39b-4d54-a5a1-73190acb9e98", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f", Pod:"coredns-674b8bbfcf-4rxwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidfe939ad947", MAC:"12:a8:6e:0c:39:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:39.446025 containerd[1600]: 2026-01-14 01:43:39.441 [INFO][4506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rxwz" WorkloadEndpoint="172--239--193--229-k8s-coredns--674b8bbfcf--4rxwz-eth0" Jan 14 01:43:39.467038 containerd[1600]: time="2026-01-14T01:43:39.466929910Z" level=info msg="connecting to shim a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f" address="unix:///run/containerd/s/32da54e43b9baf1c440e7280648d3a65c3188f3e7d2c6423a8afd3503ca93cb9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:39.510759 systemd[1]: Started cri-containerd-a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f.scope - libcontainer container a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f. 
Jan 14 01:43:39.516563 kubelet[2803]: E0114 01:43:39.516389 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:39.518045 kubelet[2803]: E0114 01:43:39.516590 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:43:39.518503 kubelet[2803]: E0114 01:43:39.518480 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:43:39.518872 kubelet[2803]: E0114 01:43:39.518846 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:43:39.541000 audit: BPF prog-id=217 op=LOAD Jan 14 01:43:39.542000 audit: BPF prog-id=218 op=LOAD Jan 14 01:43:39.542000 audit[4558]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4545 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130326163656137346132653834333961383735336238636437353462 Jan 14 01:43:39.542000 audit: BPF prog-id=218 op=UNLOAD Jan 14 01:43:39.542000 audit[4558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4545 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130326163656137346132653834333961383735336238636437353462 Jan 14 01:43:39.542000 audit: BPF prog-id=219 op=LOAD Jan 14 01:43:39.542000 audit[4558]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4545 pid=4558 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130326163656137346132653834333961383735336238636437353462 Jan 14 01:43:39.542000 audit: BPF prog-id=220 op=LOAD Jan 14 01:43:39.542000 audit[4558]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4545 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130326163656137346132653834333961383735336238636437353462 Jan 14 01:43:39.542000 audit: BPF prog-id=220 op=UNLOAD Jan 14 01:43:39.542000 audit[4558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4545 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130326163656137346132653834333961383735336238636437353462 Jan 14 01:43:39.542000 audit: BPF prog-id=219 op=UNLOAD Jan 14 01:43:39.542000 audit[4558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 
a3=0 items=0 ppid=4545 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130326163656137346132653834333961383735336238636437353462 Jan 14 01:43:39.542000 audit: BPF prog-id=221 op=LOAD Jan 14 01:43:39.542000 audit[4558]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4545 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130326163656137346132653834333961383735336238636437353462 Jan 14 01:43:39.604000 audit[4579]: NETFILTER_CFG table=filter:121 family=2 entries=16 op=nft_register_rule pid=4579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:39.604000 audit[4579]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd067e4800 a2=0 a3=7ffd067e47ec items=0 ppid=2916 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.604000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:39.614000 audit[4579]: NETFILTER_CFG table=nat:122 family=2 entries=18 op=nft_register_rule 
pid=4579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:39.614000 audit[4579]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffd067e4800 a2=0 a3=0 items=0 ppid=2916 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:39.623878 containerd[1600]: time="2026-01-14T01:43:39.623796971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rxwz,Uid:d97b54b4-c39b-4d54-a5a1-73190acb9e98,Namespace:kube-system,Attempt:0,} returns sandbox id \"a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f\"" Jan 14 01:43:39.625077 kubelet[2803]: E0114 01:43:39.625020 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:39.630095 containerd[1600]: time="2026-01-14T01:43:39.630074478Z" level=info msg="CreateContainer within sandbox \"a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:43:39.638790 containerd[1600]: time="2026-01-14T01:43:39.638350064Z" level=info msg="Container 0be81459234205b797150e8de777bd0ec2abe70ce7af17ca5f3b2c0b414c79d3: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:43:39.643135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount120031270.mount: Deactivated successfully. 
Jan 14 01:43:39.646446 containerd[1600]: time="2026-01-14T01:43:39.645834430Z" level=info msg="CreateContainer within sandbox \"a02acea74a2e8439a8753b8cd754bb8c08a1ef428abfe73bb041de651664ee0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0be81459234205b797150e8de777bd0ec2abe70ce7af17ca5f3b2c0b414c79d3\"" Jan 14 01:43:39.646446 containerd[1600]: time="2026-01-14T01:43:39.646260590Z" level=info msg="StartContainer for \"0be81459234205b797150e8de777bd0ec2abe70ce7af17ca5f3b2c0b414c79d3\"" Jan 14 01:43:39.647612 containerd[1600]: time="2026-01-14T01:43:39.647591969Z" level=info msg="connecting to shim 0be81459234205b797150e8de777bd0ec2abe70ce7af17ca5f3b2c0b414c79d3" address="unix:///run/containerd/s/32da54e43b9baf1c440e7280648d3a65c3188f3e7d2c6423a8afd3503ca93cb9" protocol=ttrpc version=3 Jan 14 01:43:39.650000 audit[4588]: NETFILTER_CFG table=filter:123 family=2 entries=16 op=nft_register_rule pid=4588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:39.650000 audit[4588]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffea179c10 a2=0 a3=7fffea179bfc items=0 ppid=2916 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:39.655000 audit[4588]: NETFILTER_CFG table=nat:124 family=2 entries=18 op=nft_register_rule pid=4588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:39.655000 audit[4588]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7fffea179c10 a2=0 a3=0 items=0 ppid=2916 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:39.667801 systemd[1]: Started cri-containerd-0be81459234205b797150e8de777bd0ec2abe70ce7af17ca5f3b2c0b414c79d3.scope - libcontainer container 0be81459234205b797150e8de777bd0ec2abe70ce7af17ca5f3b2c0b414c79d3. Jan 14 01:43:39.696000 audit: BPF prog-id=222 op=LOAD Jan 14 01:43:39.696000 audit: BPF prog-id=223 op=LOAD Jan 14 01:43:39.696000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4545 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062653831343539323334323035623739373135306538646537373762 Jan 14 01:43:39.696000 audit: BPF prog-id=223 op=UNLOAD Jan 14 01:43:39.696000 audit[4589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4545 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062653831343539323334323035623739373135306538646537373762 Jan 14 01:43:39.697000 audit: BPF prog-id=224 op=LOAD Jan 14 01:43:39.697575 systemd-networkd[1502]: cali8f898ba669b: Gained IPv6LL Jan 14 01:43:39.697000 
audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4545 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062653831343539323334323035623739373135306538646537373762 Jan 14 01:43:39.697000 audit: BPF prog-id=225 op=LOAD Jan 14 01:43:39.697000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4545 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062653831343539323334323035623739373135306538646537373762 Jan 14 01:43:39.697000 audit: BPF prog-id=225 op=UNLOAD Jan 14 01:43:39.697000 audit[4589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4545 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062653831343539323334323035623739373135306538646537373762 Jan 14 01:43:39.697000 audit: BPF 
prog-id=224 op=UNLOAD Jan 14 01:43:39.697000 audit[4589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4545 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062653831343539323334323035623739373135306538646537373762 Jan 14 01:43:39.697000 audit: BPF prog-id=226 op=LOAD Jan 14 01:43:39.697000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4545 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:39.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062653831343539323334323035623739373135306538646537373762 Jan 14 01:43:39.724060 containerd[1600]: time="2026-01-14T01:43:39.723961061Z" level=info msg="StartContainer for \"0be81459234205b797150e8de777bd0ec2abe70ce7af17ca5f3b2c0b414c79d3\" returns successfully" Jan 14 01:43:40.321244 containerd[1600]: time="2026-01-14T01:43:40.321155612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b466d74c-vftwx,Uid:5131dab4-8de3-41fd-aa18-51b8b1928537,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:43:40.421598 systemd-networkd[1502]: calidaafba94deb: Link UP Jan 14 01:43:40.421809 systemd-networkd[1502]: calidaafba94deb: Gained carrier Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.347 
[INFO][4637] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.358 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0 calico-apiserver-8b466d74c- calico-apiserver 5131dab4-8de3-41fd-aa18-51b8b1928537 826 0 2026-01-14 01:43:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8b466d74c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-193-229 calico-apiserver-8b466d74c-vftwx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidaafba94deb [] [] }} ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-vftwx" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.358 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-vftwx" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.387 [INFO][4650] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" HandleID="k8s-pod-network.76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Workload="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.387 [INFO][4650] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" HandleID="k8s-pod-network.76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Workload="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-193-229", "pod":"calico-apiserver-8b466d74c-vftwx", "timestamp":"2026-01-14 01:43:40.387709409 +0000 UTC"}, Hostname:"172-239-193-229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.388 [INFO][4650] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.388 [INFO][4650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.388 [INFO][4650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-229' Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.395 [INFO][4650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" host="172-239-193-229" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.399 [INFO][4650] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-229" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.402 [INFO][4650] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.404 [INFO][4650] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.406 [INFO][4650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.406 [INFO][4650] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" host="172-239-193-229" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.408 [INFO][4650] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.411 [INFO][4650] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" host="172-239-193-229" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.415 [INFO][4650] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.199/26] block=192.168.68.192/26 
handle="k8s-pod-network.76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" host="172-239-193-229" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.415 [INFO][4650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.199/26] handle="k8s-pod-network.76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" host="172-239-193-229" Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.415 [INFO][4650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:43:40.433831 containerd[1600]: 2026-01-14 01:43:40.415 [INFO][4650] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.199/26] IPv6=[] ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" HandleID="k8s-pod-network.76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Workload="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0" Jan 14 01:43:40.434335 containerd[1600]: 2026-01-14 01:43:40.418 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-vftwx" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0", GenerateName:"calico-apiserver-8b466d74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"5131dab4-8de3-41fd-aa18-51b8b1928537", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b466d74c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"", Pod:"calico-apiserver-8b466d74c-vftwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaafba94deb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:40.434335 containerd[1600]: 2026-01-14 01:43:40.419 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.199/32] ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-vftwx" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0" Jan 14 01:43:40.434335 containerd[1600]: 2026-01-14 01:43:40.419 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaafba94deb ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-vftwx" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0" Jan 14 01:43:40.434335 containerd[1600]: 2026-01-14 01:43:40.421 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-vftwx" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0" Jan 14 01:43:40.434335 containerd[1600]: 2026-01-14 01:43:40.421 [INFO][4637] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-vftwx" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0", GenerateName:"calico-apiserver-8b466d74c-", Namespace:"calico-apiserver", SelfLink:"", UID:"5131dab4-8de3-41fd-aa18-51b8b1928537", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b466d74c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c", Pod:"calico-apiserver-8b466d74c-vftwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaafba94deb", MAC:"1e:10:91:1e:4a:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:40.434335 containerd[1600]: 2026-01-14 01:43:40.430 [INFO][4637] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" Namespace="calico-apiserver" Pod="calico-apiserver-8b466d74c-vftwx" WorkloadEndpoint="172--239--193--229-k8s-calico--apiserver--8b466d74c--vftwx-eth0" Jan 14 01:43:40.455502 containerd[1600]: time="2026-01-14T01:43:40.455435795Z" level=info msg="connecting to shim 76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c" address="unix:///run/containerd/s/b960f2ca42eff83885e65979208f672f94adfb0227bf289a3b85883bbfddc66a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:40.465666 systemd-networkd[1502]: cali57b31e1bda6: Gained IPv6LL Jan 14 01:43:40.502575 systemd[1]: Started cri-containerd-76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c.scope - libcontainer container 76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c. Jan 14 01:43:40.513000 audit: BPF prog-id=227 op=LOAD Jan 14 01:43:40.514000 audit: BPF prog-id=228 op=LOAD Jan 14 01:43:40.514000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:40.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736616261353965663762366338343934396166633934333864386465 Jan 14 01:43:40.514000 audit: BPF prog-id=228 op=UNLOAD Jan 14 01:43:40.514000 audit[4681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:43:40.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736616261353965663762366338343934396166633934333864386465 Jan 14 01:43:40.514000 audit: BPF prog-id=229 op=LOAD Jan 14 01:43:40.514000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:40.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736616261353965663762366338343934396166633934333864386465 Jan 14 01:43:40.514000 audit: BPF prog-id=230 op=LOAD Jan 14 01:43:40.514000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:40.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736616261353965663762366338343934396166633934333864386465 Jan 14 01:43:40.514000 audit: BPF prog-id=230 op=UNLOAD Jan 14 01:43:40.514000 audit[4681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:40.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736616261353965663762366338343934396166633934333864386465 Jan 14 01:43:40.514000 audit: BPF prog-id=229 op=UNLOAD Jan 14 01:43:40.514000 audit[4681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:40.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736616261353965663762366338343934396166633934333864386465 Jan 14 01:43:40.514000 audit: BPF prog-id=231 op=LOAD Jan 14 01:43:40.514000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=4670 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:40.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736616261353965663762366338343934396166633934333864386465 Jan 14 01:43:40.522165 kubelet[2803]: E0114 01:43:40.522143 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:40.522811 
kubelet[2803]: E0114 01:43:40.522785 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:40.526528 kubelet[2803]: E0114 01:43:40.526466 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:43:40.526654 kubelet[2803]: E0114 01:43:40.526632 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:43:40.575685 containerd[1600]: time="2026-01-14T01:43:40.575469775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b466d74c-vftwx,Uid:5131dab4-8de3-41fd-aa18-51b8b1928537,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"76aba59ef7b6c84949afc9438d8de157fce6f58cc4602f4b4c660977ea137b7c\"" Jan 14 01:43:40.577998 kubelet[2803]: I0114 01:43:40.577955 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4rxwz" podStartSLOduration=37.577940544 podStartE2EDuration="37.577940544s" podCreationTimestamp="2026-01-14 
01:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:43:40.575458505 +0000 UTC m=+41.368332812" watchObservedRunningTime="2026-01-14 01:43:40.577940544 +0000 UTC m=+41.370814861" Jan 14 01:43:40.582796 containerd[1600]: time="2026-01-14T01:43:40.582767762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:43:40.606000 audit[4708]: NETFILTER_CFG table=filter:125 family=2 entries=16 op=nft_register_rule pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:40.606000 audit[4708]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe8b394a90 a2=0 a3=7ffe8b394a7c items=0 ppid=2916 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:40.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:40.612000 audit[4708]: NETFILTER_CFG table=nat:126 family=2 entries=42 op=nft_register_rule pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:40.612000 audit[4708]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffe8b394a90 a2=0 a3=7ffe8b394a7c items=0 ppid=2916 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:40.612000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:40.721656 containerd[1600]: time="2026-01-14T01:43:40.721596112Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:40.722272 
containerd[1600]: time="2026-01-14T01:43:40.722238412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:43:40.722386 containerd[1600]: time="2026-01-14T01:43:40.722278772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:40.722615 kubelet[2803]: E0114 01:43:40.722584 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:43:40.722684 kubelet[2803]: E0114 01:43:40.722628 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:43:40.722835 kubelet[2803]: E0114 01:43:40.722792 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sgnbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b466d74c-vftwx_calico-apiserver(5131dab4-8de3-41fd-aa18-51b8b1928537): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:40.724871 kubelet[2803]: E0114 01:43:40.724847 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:43:41.296664 systemd-networkd[1502]: calidfe939ad947: Gained IPv6LL Jan 14 01:43:41.323679 containerd[1600]: time="2026-01-14T01:43:41.323460571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8597978bc7-qzzjk,Uid:10b6b02c-a804-4455-980f-c8e7b004f89d,Namespace:calico-system,Attempt:0,}" Jan 14 01:43:41.423904 systemd-networkd[1502]: calia92407f2b7c: Link UP Jan 14 01:43:41.424601 systemd-networkd[1502]: calia92407f2b7c: Gained carrier Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.350 [INFO][4733] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.359 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0 calico-kube-controllers-8597978bc7- calico-system 10b6b02c-a804-4455-980f-c8e7b004f89d 831 0 2026-01-14 01:43:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8597978bc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-193-229 
calico-kube-controllers-8597978bc7-qzzjk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia92407f2b7c [] [] }} ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Namespace="calico-system" Pod="calico-kube-controllers-8597978bc7-qzzjk" WorkloadEndpoint="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.360 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Namespace="calico-system" Pod="calico-kube-controllers-8597978bc7-qzzjk" WorkloadEndpoint="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.385 [INFO][4741] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" HandleID="k8s-pod-network.ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Workload="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.385 [INFO][4741] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" HandleID="k8s-pod-network.ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Workload="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-229", "pod":"calico-kube-controllers-8597978bc7-qzzjk", "timestamp":"2026-01-14 01:43:41.38524785 +0000 UTC"}, Hostname:"172-239-193-229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.385 [INFO][4741] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.385 [INFO][4741] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.385 [INFO][4741] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-229' Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.391 [INFO][4741] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" host="172-239-193-229" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.395 [INFO][4741] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-229" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.399 [INFO][4741] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.400 [INFO][4741] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.403 [INFO][4741] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="172-239-193-229" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.403 [INFO][4741] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" host="172-239-193-229" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.406 [INFO][4741] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188 Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.410 [INFO][4741] ipam/ipam.go 
1246: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" host="172-239-193-229" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.417 [INFO][4741] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.68.200/26] block=192.168.68.192/26 handle="k8s-pod-network.ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" host="172-239-193-229" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.417 [INFO][4741] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.200/26] handle="k8s-pod-network.ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" host="172-239-193-229" Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.417 [INFO][4741] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:43:41.440853 containerd[1600]: 2026-01-14 01:43:41.417 [INFO][4741] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.68.200/26] IPv6=[] ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" HandleID="k8s-pod-network.ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Workload="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0" Jan 14 01:43:41.441744 containerd[1600]: 2026-01-14 01:43:41.419 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Namespace="calico-system" Pod="calico-kube-controllers-8597978bc7-qzzjk" WorkloadEndpoint="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0", GenerateName:"calico-kube-controllers-8597978bc7-", Namespace:"calico-system", SelfLink:"", UID:"10b6b02c-a804-4455-980f-c8e7b004f89d", 
ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8597978bc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"", Pod:"calico-kube-controllers-8597978bc7-qzzjk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.68.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia92407f2b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:41.441744 containerd[1600]: 2026-01-14 01:43:41.420 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.200/32] ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Namespace="calico-system" Pod="calico-kube-controllers-8597978bc7-qzzjk" WorkloadEndpoint="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0" Jan 14 01:43:41.441744 containerd[1600]: 2026-01-14 01:43:41.420 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia92407f2b7c ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Namespace="calico-system" Pod="calico-kube-controllers-8597978bc7-qzzjk" WorkloadEndpoint="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0" Jan 14 01:43:41.441744 
containerd[1600]: 2026-01-14 01:43:41.423 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Namespace="calico-system" Pod="calico-kube-controllers-8597978bc7-qzzjk" WorkloadEndpoint="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0" Jan 14 01:43:41.441744 containerd[1600]: 2026-01-14 01:43:41.424 [INFO][4733] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Namespace="calico-system" Pod="calico-kube-controllers-8597978bc7-qzzjk" WorkloadEndpoint="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0", GenerateName:"calico-kube-controllers-8597978bc7-", Namespace:"calico-system", SelfLink:"", UID:"10b6b02c-a804-4455-980f-c8e7b004f89d", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 43, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8597978bc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-229", ContainerID:"ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188", Pod:"calico-kube-controllers-8597978bc7-qzzjk", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.68.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia92407f2b7c", MAC:"82:40:e9:25:43:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:43:41.441744 containerd[1600]: 2026-01-14 01:43:41.434 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" Namespace="calico-system" Pod="calico-kube-controllers-8597978bc7-qzzjk" WorkloadEndpoint="172--239--193--229-k8s-calico--kube--controllers--8597978bc7--qzzjk-eth0" Jan 14 01:43:41.466840 containerd[1600]: time="2026-01-14T01:43:41.466661790Z" level=info msg="connecting to shim ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188" address="unix:///run/containerd/s/396ea752a0489f2b25a2ba1759efa95cfd3361339476cc72d511efebf80e70eb" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:43:41.497789 systemd[1]: Started cri-containerd-ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188.scope - libcontainer container ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188. 
Jan 14 01:43:41.516000 audit: BPF prog-id=232 op=LOAD Jan 14 01:43:41.516000 audit: BPF prog-id=233 op=LOAD Jan 14 01:43:41.516000 audit[4775]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:41.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303838346438393461613366643235366539353136643865363838 Jan 14 01:43:41.516000 audit: BPF prog-id=233 op=UNLOAD Jan 14 01:43:41.516000 audit[4775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:41.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303838346438393461613366643235366539353136643865363838 Jan 14 01:43:41.517000 audit: BPF prog-id=234 op=LOAD Jan 14 01:43:41.517000 audit[4775]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:41.517000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303838346438393461613366643235366539353136643865363838 Jan 14 01:43:41.517000 audit: BPF prog-id=235 op=LOAD Jan 14 01:43:41.517000 audit[4775]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:41.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303838346438393461613366643235366539353136643865363838 Jan 14 01:43:41.517000 audit: BPF prog-id=235 op=UNLOAD Jan 14 01:43:41.517000 audit[4775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:41.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303838346438393461613366643235366539353136643865363838 Jan 14 01:43:41.517000 audit: BPF prog-id=234 op=UNLOAD Jan 14 01:43:41.517000 audit[4775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:43:41.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303838346438393461613366643235366539353136643865363838 Jan 14 01:43:41.517000 audit: BPF prog-id=236 op=LOAD Jan 14 01:43:41.517000 audit[4775]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4764 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:41.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666303838346438393461613366643235366539353136643865363838 Jan 14 01:43:41.525963 kubelet[2803]: E0114 01:43:41.525940 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:41.528537 kubelet[2803]: E0114 01:43:41.528479 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:43:41.553703 systemd-networkd[1502]: calidaafba94deb: Gained IPv6LL Jan 14 01:43:41.560922 containerd[1600]: time="2026-01-14T01:43:41.560892172Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8597978bc7-qzzjk,Uid:10b6b02c-a804-4455-980f-c8e7b004f89d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff0884d894aa3fd256e9516d8e688819e6d13e4b2406f97e898110b828b3b188\"" Jan 14 01:43:41.562801 containerd[1600]: time="2026-01-14T01:43:41.562772161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:43:41.630000 audit[4806]: NETFILTER_CFG table=filter:127 family=2 entries=16 op=nft_register_rule pid=4806 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:41.630000 audit[4806]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc7b411940 a2=0 a3=7ffc7b41192c items=0 ppid=2916 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:41.630000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:41.640000 audit[4806]: NETFILTER_CFG table=nat:128 family=2 entries=54 op=nft_register_chain pid=4806 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:41.640000 audit[4806]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffc7b411940 a2=0 a3=7ffc7b41192c items=0 ppid=2916 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:41.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:41.701213 containerd[1600]: time="2026-01-14T01:43:41.701172572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:41.702254 containerd[1600]: 
time="2026-01-14T01:43:41.702221122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:43:41.702470 containerd[1600]: time="2026-01-14T01:43:41.702334532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:41.703130 kubelet[2803]: E0114 01:43:41.702605 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:43:41.703130 kubelet[2803]: E0114 01:43:41.702659 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:43:41.703130 kubelet[2803]: E0114 01:43:41.702825 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd7n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8597978bc7-qzzjk_calico-system(10b6b02c-a804-4455-980f-c8e7b004f89d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:41.704723 kubelet[2803]: E0114 01:43:41.704681 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:43:42.529210 kubelet[2803]: E0114 01:43:42.528922 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:42.530083 kubelet[2803]: E0114 
01:43:42.530051 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:43:42.530162 kubelet[2803]: E0114 01:43:42.530126 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:43:43.408623 systemd-networkd[1502]: calia92407f2b7c: Gained IPv6LL Jan 14 01:43:43.531256 kubelet[2803]: E0114 01:43:43.531214 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:43:45.324265 containerd[1600]: time="2026-01-14T01:43:45.324214340Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:43:45.459970 containerd[1600]: time="2026-01-14T01:43:45.459916362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:45.460849 containerd[1600]: time="2026-01-14T01:43:45.460813702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:43:45.460941 containerd[1600]: time="2026-01-14T01:43:45.460833602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:45.461035 kubelet[2803]: E0114 01:43:45.461003 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:43:45.461594 kubelet[2803]: E0114 01:43:45.461044 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:43:45.461594 kubelet[2803]: E0114 01:43:45.461148 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c32153cf5ee94e1085ad7bf9a7fbf30a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4dfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c4f8b6b9-9knmv_calico-system(587711a7-ed5a-468c-b6b8-7056f146431a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:45.463290 containerd[1600]: time="2026-01-14T01:43:45.463234001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:43:45.587691 containerd[1600]: 
time="2026-01-14T01:43:45.587535859Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:45.588721 containerd[1600]: time="2026-01-14T01:43:45.588671358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:43:45.589120 containerd[1600]: time="2026-01-14T01:43:45.588687028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:45.589161 kubelet[2803]: E0114 01:43:45.588889 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:43:45.589161 kubelet[2803]: E0114 01:43:45.588931 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:43:45.589161 kubelet[2803]: E0114 01:43:45.589041 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4dfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c4f8b6b9-9knmv_calico-system(587711a7-ed5a-468c-b6b8-7056f146431a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:45.590505 kubelet[2803]: E0114 01:43:45.590474 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:43:47.963964 kubelet[2803]: I0114 01:43:47.963389 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:43:47.963964 kubelet[2803]: E0114 01:43:47.963862 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:47.997000 audit[4936]: NETFILTER_CFG table=filter:129 family=2 entries=15 op=nft_register_rule pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:48.006214 kernel: kauditd_printk_skb: 218 callbacks suppressed Jan 14 01:43:48.006591 kernel: audit: type=1325 audit(1768355027.997:689): table=filter:129 family=2 entries=15 op=nft_register_rule pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:47.997000 audit[4936]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd83eceba0 a2=0 a3=7ffd83eceb8c items=0 ppid=2916 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.019460 kernel: audit: type=1300 audit(1768355027.997:689): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd83eceba0 a2=0 a3=7ffd83eceb8c items=0 ppid=2916 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.019545 kernel: audit: type=1327 audit(1768355027.997:689): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:47.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:48.017000 audit[4936]: NETFILTER_CFG table=nat:130 family=2 entries=25 op=nft_register_chain pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:48.021997 kernel: audit: type=1325 audit(1768355028.017:690): table=nat:130 family=2 entries=25 op=nft_register_chain pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:43:48.017000 audit[4936]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffd83eceba0 a2=0 a3=7ffd83eceb8c items=0 ppid=2916 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.027361 kernel: audit: type=1300 audit(1768355028.017:690): arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffd83eceba0 a2=0 a3=7ffd83eceb8c items=0 ppid=2916 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:43:48.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:48.035298 kernel: audit: type=1327 audit(1768355028.017:690): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:43:48.401000 audit: BPF prog-id=237 op=LOAD Jan 14 01:43:48.406441 kernel: audit: type=1334 audit(1768355028.401:691): prog-id=237 op=LOAD Jan 14 01:43:48.401000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebb90b170 a2=98 a3=1fffffffffffffff items=0 ppid=4963 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.416445 kernel: audit: type=1300 audit(1768355028.401:691): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebb90b170 a2=98 a3=1fffffffffffffff items=0 ppid=4963 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.401000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:43:48.427461 kernel: audit: type=1327 audit(1768355028.401:691): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:43:48.405000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:43:48.434533 kernel: audit: type=1334 audit(1768355028.405:692): prog-id=237 
op=UNLOAD Jan 14 01:43:48.405000 audit[4985]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffebb90b140 a3=0 items=0 ppid=4963 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.405000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:43:48.405000 audit: BPF prog-id=238 op=LOAD Jan 14 01:43:48.405000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebb90b050 a2=94 a3=3 items=0 ppid=4963 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.405000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:43:48.405000 audit: BPF prog-id=238 op=UNLOAD Jan 14 01:43:48.405000 audit[4985]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffebb90b050 a2=94 a3=3 items=0 ppid=4963 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.405000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:43:48.405000 audit: BPF prog-id=239 op=LOAD Jan 14 01:43:48.405000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebb90b090 a2=94 a3=7ffebb90b270 items=0 ppid=4963 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.405000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:43:48.405000 audit: BPF prog-id=239 op=UNLOAD Jan 14 01:43:48.405000 audit[4985]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffebb90b090 a2=94 a3=7ffebb90b270 items=0 ppid=4963 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.405000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:43:48.416000 audit: BPF prog-id=240 op=LOAD Jan 14 01:43:48.416000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffeeec6f60 a2=98 a3=3 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.416000 audit: BPF prog-id=240 op=UNLOAD Jan 14 01:43:48.416000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffeeec6f30 a3=0 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.416000 audit: BPF prog-id=241 op=LOAD Jan 14 01:43:48.416000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffeeec6d50 a2=94 a3=54428f items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.417000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:43:48.417000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffeeec6d50 a2=94 a3=54428f items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.417000 audit: BPF prog-id=242 op=LOAD Jan 14 01:43:48.417000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffeeec6d80 a2=94 a3=2 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:43:48.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.417000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:43:48.417000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffeeec6d80 a2=0 a3=2 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.417000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.543934 kubelet[2803]: E0114 01:43:48.543886 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:43:48.675000 audit: BPF prog-id=243 op=LOAD Jan 14 01:43:48.675000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffeeec6c40 a2=94 a3=1 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.675000 audit: BPF prog-id=243 op=UNLOAD Jan 14 01:43:48.675000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffeeec6c40 a2=94 a3=1 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.684000 audit: BPF prog-id=244 op=LOAD Jan 14 01:43:48.684000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffeeec6c30 a2=94 a3=4 items=0 ppid=4963 pid=4986 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.684000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.684000 audit: BPF prog-id=244 op=UNLOAD Jan 14 01:43:48.684000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffeeec6c30 a2=0 a3=4 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.684000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.685000 audit: BPF prog-id=245 op=LOAD Jan 14 01:43:48.685000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffeeec6a90 a2=94 a3=5 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.685000 audit: BPF prog-id=245 op=UNLOAD Jan 14 01:43:48.685000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffeeec6a90 a2=0 a3=5 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.685000 audit: BPF prog-id=246 op=LOAD Jan 14 01:43:48.685000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffeeec6cb0 a2=94 a3=6 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.685000 audit: BPF prog-id=246 op=UNLOAD Jan 14 01:43:48.685000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffeeec6cb0 a2=0 a3=6 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.685000 audit: BPF prog-id=247 op=LOAD Jan 14 01:43:48.685000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffeeec6460 a2=94 a3=88 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.685000 audit: BPF prog-id=248 op=LOAD Jan 14 01:43:48.685000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffeeec62e0 a2=94 a3=2 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.685000 audit: BPF prog-id=248 op=UNLOAD Jan 14 01:43:48.685000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffeeec6310 a2=0 a3=7fffeeec6410 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.685000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.686000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:43:48.686000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=111f4d10 a2=0 a3=96e1b626bb503065 items=0 ppid=4963 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.686000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:43:48.697000 audit: BPF prog-id=249 op=LOAD Jan 14 01:43:48.697000 audit[4997]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe5a89430 a2=98 a3=1999999999999999 items=0 ppid=4963 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.697000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:43:48.697000 audit: BPF prog-id=249 op=UNLOAD Jan 14 01:43:48.697000 audit[4997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffe5a89400 a3=0 items=0 ppid=4963 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.697000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:43:48.697000 audit: BPF prog-id=250 op=LOAD Jan 14 01:43:48.697000 audit[4997]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe5a89310 a2=94 a3=ffff items=0 ppid=4963 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.697000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:43:48.697000 audit: BPF prog-id=250 op=UNLOAD Jan 14 01:43:48.697000 audit[4997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffe5a89310 a2=94 a3=ffff items=0 ppid=4963 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.697000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:43:48.697000 audit: BPF prog-id=251 op=LOAD Jan 14 01:43:48.697000 audit[4997]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe5a89350 a2=94 a3=7fffe5a89530 items=0 ppid=4963 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.697000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:43:48.697000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:43:48.697000 audit[4997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffe5a89350 a2=94 a3=7fffe5a89530 items=0 ppid=4963 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.697000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:43:48.772640 systemd-networkd[1502]: vxlan.calico: Link UP Jan 14 01:43:48.772655 systemd-networkd[1502]: vxlan.calico: Gained carrier Jan 14 01:43:48.809000 audit: BPF prog-id=252 op=LOAD Jan 14 01:43:48.809000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff85801a0 a2=98 a3=0 items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.809000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:43:48.809000 audit[5024]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=8 a2=7ffff8580170 a3=0 items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.809000 audit: BPF prog-id=253 op=LOAD Jan 14 01:43:48.809000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff857ffb0 a2=94 a3=54428f items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.809000 audit: BPF prog-id=253 op=UNLOAD Jan 14 01:43:48.809000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff857ffb0 a2=94 a3=54428f items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.809000 audit: BPF prog-id=254 op=LOAD Jan 14 01:43:48.809000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff857ffe0 a2=94 a3=2 items=0 
ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.809000 audit: BPF prog-id=254 op=UNLOAD Jan 14 01:43:48.809000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff857ffe0 a2=0 a3=2 items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.809000 audit: BPF prog-id=255 op=LOAD Jan 14 01:43:48.809000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff857fd90 a2=94 a3=4 items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.809000 audit: BPF prog-id=255 op=UNLOAD Jan 14 01:43:48.809000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffff857fd90 a2=94 a3=4 items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.809000 audit: BPF prog-id=256 op=LOAD Jan 14 01:43:48.809000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff857fe90 a2=94 a3=7ffff8580010 items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.809000 audit: BPF prog-id=256 op=UNLOAD Jan 14 01:43:48.809000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffff857fe90 a2=0 a3=7ffff8580010 items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.809000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.810000 audit: BPF prog-id=257 op=LOAD Jan 14 01:43:48.810000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff857f5c0 a2=94 a3=2 items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.810000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.810000 audit: BPF prog-id=257 op=UNLOAD Jan 14 01:43:48.810000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffff857f5c0 a2=0 a3=2 items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.810000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.810000 audit: BPF prog-id=258 op=LOAD Jan 14 01:43:48.810000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff857f6c0 a2=94 a3=30 items=0 ppid=4963 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.810000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:43:48.820000 audit: BPF prog-id=259 op=LOAD Jan 14 01:43:48.820000 audit[5028]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3493b630 a2=98 a3=0 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.820000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:48.820000 audit: BPF prog-id=259 op=UNLOAD Jan 14 01:43:48.820000 audit[5028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe3493b600 a3=0 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.820000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:48.820000 audit: BPF prog-id=260 op=LOAD Jan 14 01:43:48.820000 audit[5028]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe3493b420 a2=94 a3=54428f items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.820000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:48.820000 audit: BPF prog-id=260 op=UNLOAD Jan 14 01:43:48.820000 audit[5028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe3493b420 a2=94 a3=54428f items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.820000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:48.822000 audit: BPF prog-id=261 op=LOAD Jan 14 01:43:48.822000 audit[5028]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe3493b450 a2=94 a3=2 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.822000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:48.822000 audit: BPF prog-id=261 op=UNLOAD Jan 14 01:43:48.822000 audit[5028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe3493b450 a2=0 a3=2 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:48.822000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.033000 audit: BPF prog-id=262 op=LOAD Jan 14 01:43:49.033000 audit[5028]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe3493b310 a2=94 a3=1 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.033000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.033000 audit: BPF prog-id=262 op=UNLOAD Jan 14 01:43:49.033000 audit[5028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe3493b310 a2=94 a3=1 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.033000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.042000 audit: BPF prog-id=263 op=LOAD Jan 14 01:43:49.042000 audit[5028]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe3493b300 a2=94 a3=4 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.042000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.042000 audit: BPF prog-id=263 op=UNLOAD Jan 14 01:43:49.042000 audit[5028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe3493b300 a2=0 a3=4 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.042000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.043000 audit: BPF prog-id=264 op=LOAD Jan 14 01:43:49.043000 audit[5028]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe3493b160 a2=94 a3=5 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.043000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.043000 audit: BPF prog-id=264 op=UNLOAD Jan 14 01:43:49.043000 audit[5028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe3493b160 a2=0 a3=5 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.043000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.043000 audit: BPF prog-id=265 op=LOAD Jan 14 01:43:49.043000 audit[5028]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe3493b380 a2=94 a3=6 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.043000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.043000 audit: BPF prog-id=265 op=UNLOAD Jan 14 01:43:49.043000 audit[5028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe3493b380 a2=0 a3=6 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.043000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.044000 audit: BPF prog-id=266 op=LOAD Jan 14 01:43:49.044000 audit[5028]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe3493ab30 a2=94 a3=88 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.044000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.044000 audit: BPF prog-id=267 op=LOAD Jan 14 01:43:49.044000 audit[5028]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe3493a9b0 a2=94 a3=2 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.044000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.044000 audit: BPF prog-id=267 op=UNLOAD Jan 14 01:43:49.044000 audit[5028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe3493a9e0 a2=0 a3=7ffe3493aae0 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.044000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.044000 audit: BPF prog-id=266 op=UNLOAD Jan 14 01:43:49.044000 audit[5028]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=37ec6d10 a2=0 a3=e1432e39d2ab1854 items=0 ppid=4963 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.044000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:43:49.051000 audit: BPF prog-id=258 op=UNLOAD Jan 14 01:43:49.051000 audit[4963]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00059c080 a2=0 a3=0 items=0 ppid=3917 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.051000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:43:49.139000 audit[5060]: NETFILTER_CFG table=nat:131 
family=2 entries=15 op=nft_register_chain pid=5060 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:43:49.139000 audit[5060]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffff7a8ab00 a2=0 a3=7ffff7a8aaec items=0 ppid=4963 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.139000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:43:49.140000 audit[5061]: NETFILTER_CFG table=mangle:132 family=2 entries=16 op=nft_register_chain pid=5061 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:43:49.140000 audit[5061]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdabc1fa10 a2=0 a3=7ffdabc1f9fc items=0 ppid=4963 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.140000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:43:49.151000 audit[5058]: NETFILTER_CFG table=raw:133 family=2 entries=21 op=nft_register_chain pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:43:49.151000 audit[5058]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe53a493b0 a2=0 a3=7ffe53a4939c items=0 ppid=4963 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.151000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:43:49.160000 audit[5064]: NETFILTER_CFG table=filter:134 family=2 entries=321 op=nft_register_chain pid=5064 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:43:49.160000 audit[5064]: SYSCALL arch=c000003e syscall=46 success=yes exit=190616 a0=3 a1=7ffc125866a0 a2=0 a3=7ffc1258668c items=0 ppid=4963 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:43:49.160000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:43:50.704528 systemd-networkd[1502]: vxlan.calico: Gained IPv6LL Jan 14 01:43:52.322114 containerd[1600]: time="2026-01-14T01:43:52.321862376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:43:52.617176 containerd[1600]: time="2026-01-14T01:43:52.616942827Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:52.618010 containerd[1600]: time="2026-01-14T01:43:52.617983291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:43:52.618160 containerd[1600]: time="2026-01-14T01:43:52.618049009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:52.618267 kubelet[2803]: E0114 01:43:52.618225 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:43:52.618979 kubelet[2803]: E0114 01:43:52.618274 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:43:52.618979 kubelet[2803]: E0114 01:43:52.618452 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vg8lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b466d74c-r9454_calico-apiserver(467c90a2-bf12-4a6d-a6a3-0bb4155d4e42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:52.619867 kubelet[2803]: E0114 01:43:52.619827 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:43:55.323112 containerd[1600]: time="2026-01-14T01:43:55.323035584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:43:55.450276 containerd[1600]: time="2026-01-14T01:43:55.450227570Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:43:55.451308 containerd[1600]: time="2026-01-14T01:43:55.451257280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:43:55.451308 containerd[1600]: time="2026-01-14T01:43:55.451283580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:55.451602 kubelet[2803]: E0114 01:43:55.451543 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:43:55.452307 kubelet[2803]: E0114 01:43:55.451629 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:43:55.452307 kubelet[2803]: E0114 01:43:55.451922 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5hz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gg5g8_calico-system(27494ae0-0ad7-4d62-b447-69c7f55fa588): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:43:55.454453 containerd[1600]: time="2026-01-14T01:43:55.454401413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:43:55.580307 containerd[1600]: time="2026-01-14T01:43:55.580185498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:55.581069 containerd[1600]: time="2026-01-14T01:43:55.581041975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:43:55.581145 containerd[1600]: time="2026-01-14T01:43:55.581097123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:55.581226 kubelet[2803]: E0114 01:43:55.581195 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:43:55.581295 kubelet[2803]: E0114 01:43:55.581248 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:43:55.581703 kubelet[2803]: E0114 01:43:55.581393 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5hz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gg5g8_calico-system(27494ae0-0ad7-4d62-b447-69c7f55fa588): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:55.582619 kubelet[2803]: E0114 01:43:55.582571 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:43:56.323029 containerd[1600]: time="2026-01-14T01:43:56.322977937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:43:56.468656 containerd[1600]: time="2026-01-14T01:43:56.468575387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:56.477226 containerd[1600]: time="2026-01-14T01:43:56.477136890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:43:56.477357 containerd[1600]: time="2026-01-14T01:43:56.477257267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:56.477727 kubelet[2803]: E0114 01:43:56.477510 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:43:56.477727 kubelet[2803]: E0114 01:43:56.477583 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:43:56.478745 kubelet[2803]: E0114 01:43:56.477815 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sgnbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b466d74c-vftwx_calico-apiserver(5131dab4-8de3-41fd-aa18-51b8b1928537): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:56.479528 containerd[1600]: time="2026-01-14T01:43:56.478645421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:43:56.479739 kubelet[2803]: E0114 01:43:56.479660 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:43:56.626689 containerd[1600]: time="2026-01-14T01:43:56.626479431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:43:56.637293 containerd[1600]: time="2026-01-14T01:43:56.637101479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:43:56.637293 containerd[1600]: time="2026-01-14T01:43:56.637226586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:56.637549 kubelet[2803]: E0114 01:43:56.637491 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:43:56.637654 kubelet[2803]: E0114 01:43:56.637549 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:43:56.637800 kubelet[2803]: E0114 01:43:56.637714 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pglf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-l58pb_calico-system(79093d5d-07cf-4a25-a816-7eeb844e241f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:56.638957 kubelet[2803]: E0114 01:43:56.638914 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:43:57.321703 containerd[1600]: time="2026-01-14T01:43:57.321637856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:43:57.453380 containerd[1600]: time="2026-01-14T01:43:57.453202640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:43:57.454390 containerd[1600]: 
time="2026-01-14T01:43:57.454307403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:43:57.454390 containerd[1600]: time="2026-01-14T01:43:57.454336763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:43:57.454591 kubelet[2803]: E0114 01:43:57.454556 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:43:57.454636 kubelet[2803]: E0114 01:43:57.454596 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:43:57.454786 kubelet[2803]: E0114 01:43:57.454731 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd7n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8597978bc7-qzzjk_calico-system(10b6b02c-a804-4455-980f-c8e7b004f89d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:43:57.456109 kubelet[2803]: E0114 01:43:57.456063 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:44:00.323061 kubelet[2803]: E0114 01:44:00.322715 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:44:03.816332 kubelet[2803]: E0114 01:44:03.816277 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:44:05.322261 kubelet[2803]: E0114 01:44:05.322219 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:44:08.323350 kubelet[2803]: E0114 01:44:08.322870 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:44:08.324746 kubelet[2803]: E0114 01:44:08.324687 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:44:09.322274 kubelet[2803]: E0114 01:44:09.322193 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:44:11.322083 kubelet[2803]: E0114 01:44:11.321758 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:44:15.321438 containerd[1600]: time="2026-01-14T01:44:15.321387563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:44:15.468727 containerd[1600]: time="2026-01-14T01:44:15.468534058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:44:15.469770 containerd[1600]: time="2026-01-14T01:44:15.469656310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:44:15.469770 containerd[1600]: time="2026-01-14T01:44:15.469741449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:44:15.470064 kubelet[2803]: E0114 01:44:15.470023 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:44:15.470630 kubelet[2803]: E0114 01:44:15.470073 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:44:15.471712 kubelet[2803]: E0114 01:44:15.471638 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c32153cf5ee94e1085ad7bf9a7fbf30a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4dfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c4f8b6b9-9knmv_calico-system(587711a7-ed5a-468c-b6b8-7056f146431a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:44:15.474389 containerd[1600]: time="2026-01-14T01:44:15.474312291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:44:15.607915 containerd[1600]: 
time="2026-01-14T01:44:15.607787299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:44:15.608635 containerd[1600]: time="2026-01-14T01:44:15.608505413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:44:15.608635 containerd[1600]: time="2026-01-14T01:44:15.608591042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:44:15.608889 kubelet[2803]: E0114 01:44:15.608831 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:44:15.608889 kubelet[2803]: E0114 01:44:15.608880 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:44:15.609122 kubelet[2803]: E0114 01:44:15.609031 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4dfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c4f8b6b9-9knmv_calico-system(587711a7-ed5a-468c-b6b8-7056f146431a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:44:15.610544 kubelet[2803]: E0114 01:44:15.610487 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:44:17.325567 containerd[1600]: time="2026-01-14T01:44:17.325022162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:44:17.458882 containerd[1600]: time="2026-01-14T01:44:17.458621631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:44:17.459683 containerd[1600]: time="2026-01-14T01:44:17.459632783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:44:17.459773 containerd[1600]: time="2026-01-14T01:44:17.459718293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:44:17.459986 kubelet[2803]: E0114 01:44:17.459933 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:44:17.460475 kubelet[2803]: E0114 01:44:17.459988 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:44:17.460475 kubelet[2803]: E0114 01:44:17.460097 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vg8lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b466d74c-r9454_calico-apiserver(467c90a2-bf12-4a6d-a6a3-0bb4155d4e42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:44:17.462482 kubelet[2803]: E0114 01:44:17.461561 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:44:19.322988 containerd[1600]: time="2026-01-14T01:44:19.322934575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:44:19.449001 containerd[1600]: time="2026-01-14T01:44:19.448948901Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:44:19.449959 containerd[1600]: time="2026-01-14T01:44:19.449915594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:44:19.450046 containerd[1600]: time="2026-01-14T01:44:19.449988194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:44:19.450295 kubelet[2803]: E0114 01:44:19.450220 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:44:19.450295 kubelet[2803]: E0114 01:44:19.450270 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:44:19.451485 kubelet[2803]: E0114 01:44:19.451439 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sgnbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b466d74c-vftwx_calico-apiserver(5131dab4-8de3-41fd-aa18-51b8b1928537): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:44:19.452140 containerd[1600]: time="2026-01-14T01:44:19.452086621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:44:19.453265 kubelet[2803]: E0114 01:44:19.453218 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:44:19.590604 containerd[1600]: time="2026-01-14T01:44:19.590400638Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:44:19.591565 containerd[1600]: time="2026-01-14T01:44:19.591534361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:44:19.591662 containerd[1600]: time="2026-01-14T01:44:19.591607970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:44:19.591770 kubelet[2803]: E0114 01:44:19.591740 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:44:19.591838 kubelet[2803]: E0114 01:44:19.591785 2803 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:44:19.591955 kubelet[2803]: E0114 01:44:19.591901 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd7n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8597978bc7-qzzjk_calico-system(10b6b02c-a804-4455-980f-c8e7b004f89d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:44:19.593369 kubelet[2803]: E0114 01:44:19.593340 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:44:20.320945 kubelet[2803]: 
E0114 01:44:20.320843 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:44:21.322696 containerd[1600]: time="2026-01-14T01:44:21.322653831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:44:21.452706 containerd[1600]: time="2026-01-14T01:44:21.452570905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:44:21.454221 containerd[1600]: time="2026-01-14T01:44:21.454090916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:44:21.454221 containerd[1600]: time="2026-01-14T01:44:21.454180916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:44:21.454366 kubelet[2803]: E0114 01:44:21.454331 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:44:21.454822 kubelet[2803]: E0114 01:44:21.454380 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:44:21.454822 kubelet[2803]: E0114 01:44:21.454534 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5hz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gg5g8_calico-system(27494ae0-0ad7-4d62-b447-69c7f55fa588): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:44:21.459835 containerd[1600]: time="2026-01-14T01:44:21.459791244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:44:21.585867 containerd[1600]: time="2026-01-14T01:44:21.585693430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:44:21.586966 containerd[1600]: time="2026-01-14T01:44:21.586866093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:44:21.586966 containerd[1600]: time="2026-01-14T01:44:21.586944573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:44:21.587289 kubelet[2803]: E0114 01:44:21.587223 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:44:21.587505 kubelet[2803]: E0114 01:44:21.587266 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:44:21.587572 kubelet[2803]: E0114 01:44:21.587524 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5hz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gg5g8_calico-system(27494ae0-0ad7-4d62-b447-69c7f55fa588): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:44:21.589546 kubelet[2803]: E0114 01:44:21.589481 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:44:22.320929 kubelet[2803]: E0114 01:44:22.320893 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:44:26.325891 containerd[1600]: time="2026-01-14T01:44:26.325840968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:44:26.327433 kubelet[2803]: E0114 01:44:26.326389 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:44:26.469452 containerd[1600]: time="2026-01-14T01:44:26.469291740Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:44:26.470506 containerd[1600]: time="2026-01-14T01:44:26.470409725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:44:26.470612 containerd[1600]: time="2026-01-14T01:44:26.470542564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:44:26.471007 kubelet[2803]: E0114 01:44:26.470861 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:44:26.471081 kubelet[2803]: E0114 01:44:26.471021 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:44:26.471912 kubelet[2803]: E0114 01:44:26.471842 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pglf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-l58pb_calico-system(79093d5d-07cf-4a25-a816-7eeb844e241f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:44:26.473204 kubelet[2803]: E0114 01:44:26.473154 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:44:31.322690 kubelet[2803]: E0114 01:44:31.321796 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:44:31.322690 kubelet[2803]: E0114 01:44:31.322362 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:44:32.323452 kubelet[2803]: E0114 01:44:32.323390 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:44:33.321695 kubelet[2803]: E0114 01:44:33.321622 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:44:33.322641 kubelet[2803]: E0114 01:44:33.322516 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:44:34.321005 kubelet[2803]: E0114 01:44:34.320715 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:44:37.324085 kubelet[2803]: E0114 01:44:37.324034 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:44:38.320927 kubelet[2803]: E0114 01:44:38.320875 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:44:44.323071 kubelet[2803]: E0114 01:44:44.323030 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:44:44.323595 kubelet[2803]: E0114 01:44:44.323453 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:44:45.323441 kubelet[2803]: E0114 
01:44:45.320615 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:44:46.322540 kubelet[2803]: E0114 01:44:46.321318 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:44:47.321739 kubelet[2803]: E0114 01:44:47.321705 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:44:48.321899 kubelet[2803]: E0114 01:44:48.321580 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:44:52.321652 kubelet[2803]: E0114 01:44:52.321588 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:44:56.322045 kubelet[2803]: E0114 01:44:56.321976 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:44:57.321900 kubelet[2803]: E0114 01:44:57.321540 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:44:59.322066 kubelet[2803]: E0114 01:44:59.320958 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:44:59.324533 kubelet[2803]: E0114 01:44:59.324371 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:44:59.326605 kubelet[2803]: E0114 01:44:59.326476 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:44:59.326856 containerd[1600]: time="2026-01-14T01:44:59.326821650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:44:59.327991 kubelet[2803]: E0114 01:44:59.327932 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:44:59.659748 containerd[1600]: time="2026-01-14T01:44:59.659569821Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:44:59.661112 containerd[1600]: time="2026-01-14T01:44:59.661017213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:44:59.661200 containerd[1600]: time="2026-01-14T01:44:59.661100986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:44:59.662552 kubelet[2803]: E0114 01:44:59.661394 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:44:59.662629 kubelet[2803]: E0114 01:44:59.662589 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:44:59.663108 kubelet[2803]: E0114 01:44:59.663055 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c32153cf5ee94e1085ad7bf9a7fbf30a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4dfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c4f8b6b9-9knmv_calico-system(587711a7-ed5a-468c-b6b8-7056f146431a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:44:59.666081 containerd[1600]: time="2026-01-14T01:44:59.666059840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:44:59.803781 containerd[1600]: 
time="2026-01-14T01:44:59.803708464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:44:59.805056 containerd[1600]: time="2026-01-14T01:44:59.804941251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:44:59.805056 containerd[1600]: time="2026-01-14T01:44:59.805026513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:44:59.805439 kubelet[2803]: E0114 01:44:59.805325 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:44:59.805439 kubelet[2803]: E0114 01:44:59.805385 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:44:59.805691 kubelet[2803]: E0114 01:44:59.805643 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4dfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c4f8b6b9-9knmv_calico-system(587711a7-ed5a-468c-b6b8-7056f146431a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:44:59.807116 kubelet[2803]: E0114 01:44:59.807054 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:45:05.321912 kubelet[2803]: E0114 01:45:05.321839 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:45:07.322541 containerd[1600]: time="2026-01-14T01:45:07.322379463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:45:07.465846 containerd[1600]: time="2026-01-14T01:45:07.465674737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:45:07.466781 containerd[1600]: time="2026-01-14T01:45:07.466667309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:45:07.467554 containerd[1600]: time="2026-01-14T01:45:07.466768102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:45:07.468281 kubelet[2803]: E0114 01:45:07.467797 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:45:07.468281 kubelet[2803]: E0114 01:45:07.467842 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:45:07.468281 kubelet[2803]: E0114 01:45:07.467958 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vg8lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b466d74c-r9454_calico-apiserver(467c90a2-bf12-4a6d-a6a3-0bb4155d4e42): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:45:07.469447 kubelet[2803]: E0114 01:45:07.469351 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:45:09.324844 containerd[1600]: time="2026-01-14T01:45:09.324805564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:45:09.454870 containerd[1600]: time="2026-01-14T01:45:09.454796086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:45:09.457700 containerd[1600]: time="2026-01-14T01:45:09.457587457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:45:09.459452 containerd[1600]: time="2026-01-14T01:45:09.457644058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:45:09.459526 kubelet[2803]: E0114 01:45:09.458164 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:45:09.459526 kubelet[2803]: E0114 01:45:09.458257 2803 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:45:09.459526 kubelet[2803]: E0114 01:45:09.458534 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd7n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8597978bc7-qzzjk_calico-system(10b6b02c-a804-4455-980f-c8e7b004f89d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:45:09.460784 kubelet[2803]: E0114 01:45:09.460722 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:45:10.321443 
containerd[1600]: time="2026-01-14T01:45:10.321367924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:45:10.453555 containerd[1600]: time="2026-01-14T01:45:10.453510936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:45:10.454398 containerd[1600]: time="2026-01-14T01:45:10.454372514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:45:10.454506 containerd[1600]: time="2026-01-14T01:45:10.454445776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:45:10.454636 kubelet[2803]: E0114 01:45:10.454603 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:45:10.454682 kubelet[2803]: E0114 01:45:10.454666 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:45:10.454830 kubelet[2803]: E0114 01:45:10.454790 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5hz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gg5g8_calico-system(27494ae0-0ad7-4d62-b447-69c7f55fa588): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:45:10.456933 containerd[1600]: time="2026-01-14T01:45:10.456894949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:45:10.589754 containerd[1600]: time="2026-01-14T01:45:10.589222383Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:45:10.590515 containerd[1600]: time="2026-01-14T01:45:10.590384258Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:45:10.590660 containerd[1600]: time="2026-01-14T01:45:10.590385749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:45:10.590753 kubelet[2803]: E0114 01:45:10.590708 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:45:10.591065 kubelet[2803]: E0114 01:45:10.590756 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:45:10.591260 kubelet[2803]: E0114 01:45:10.591205 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5hz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gg5g8_calico-system(27494ae0-0ad7-4d62-b447-69c7f55fa588): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:45:10.592491 kubelet[2803]: E0114 01:45:10.592410 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:45:13.323448 containerd[1600]: time="2026-01-14T01:45:13.323238379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:45:13.327330 kubelet[2803]: E0114 01:45:13.325675 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:45:13.479854 containerd[1600]: 
time="2026-01-14T01:45:13.479785432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:45:13.480719 containerd[1600]: time="2026-01-14T01:45:13.480676829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:45:13.480903 containerd[1600]: time="2026-01-14T01:45:13.480749201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:45:13.480990 kubelet[2803]: E0114 01:45:13.480916 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:45:13.480990 kubelet[2803]: E0114 01:45:13.480961 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:45:13.481171 kubelet[2803]: E0114 01:45:13.481078 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sgnbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8b466d74c-vftwx_calico-apiserver(5131dab4-8de3-41fd-aa18-51b8b1928537): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:45:13.482572 kubelet[2803]: E0114 01:45:13.482542 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:45:15.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.239.193.229:22-20.161.92.111:42850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:15.664939 systemd[1]: Started sshd@7-172.239.193.229:22-20.161.92.111:42850.service - OpenSSH per-connection server daemon (20.161.92.111:42850). Jan 14 01:45:15.666008 kernel: kauditd_printk_skb: 194 callbacks suppressed Jan 14 01:45:15.666052 kernel: audit: type=1130 audit(1768355115.664:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.239.193.229:22-20.161.92.111:42850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:45:15.837000 audit[5213]: USER_ACCT pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:15.845439 kernel: audit: type=1101 audit(1768355115.837:758): pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:15.846282 sshd[5213]: Accepted publickey for core from 20.161.92.111 port 42850 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:45:15.848994 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:45:15.847000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:15.860537 kernel: audit: type=1103 audit(1768355115.847:759): pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:15.866911 systemd-logind[1577]: New session 9 of user core. 
Jan 14 01:45:15.868627 kernel: audit: type=1006 audit(1768355115.847:760): pid=5213 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 01:45:15.847000 audit[5213]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc42ef7bf0 a2=3 a3=0 items=0 ppid=1 pid=5213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:15.879892 kernel: audit: type=1300 audit(1768355115.847:760): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc42ef7bf0 a2=3 a3=0 items=0 ppid=1 pid=5213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:15.847000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:15.883880 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 14 01:45:15.885789 kernel: audit: type=1327 audit(1768355115.847:760): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:15.891000 audit[5213]: USER_START pid=5213 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:15.900516 kernel: audit: type=1105 audit(1768355115.891:761): pid=5213 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:15.895000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:15.909441 kernel: audit: type=1103 audit(1768355115.895:762): pid=5217 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:16.057017 sshd[5217]: Connection closed by 20.161.92.111 port 42850 Jan 14 01:45:16.058747 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Jan 14 01:45:16.061000 audit[5213]: USER_END pid=5213 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:16.065997 systemd[1]: sshd@7-172.239.193.229:22-20.161.92.111:42850.service: Deactivated successfully. Jan 14 01:45:16.067966 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Jan 14 01:45:16.069956 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 01:45:16.071461 kernel: audit: type=1106 audit(1768355116.061:763): pid=5213 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:16.074353 systemd-logind[1577]: Removed session 9. Jan 14 01:45:16.062000 audit[5213]: CRED_DISP pid=5213 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:16.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.239.193.229:22-20.161.92.111:42850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:45:16.083523 kernel: audit: type=1104 audit(1768355116.062:764): pid=5213 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:20.322847 containerd[1600]: time="2026-01-14T01:45:20.322174951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:45:20.324026 kubelet[2803]: E0114 01:45:20.323323 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:45:20.469076 containerd[1600]: time="2026-01-14T01:45:20.469027904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:45:20.471453 containerd[1600]: time="2026-01-14T01:45:20.471362583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:45:20.472864 containerd[1600]: time="2026-01-14T01:45:20.471471745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:45:20.473163 kubelet[2803]: E0114 01:45:20.473122 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:45:20.473258 kubelet[2803]: E0114 01:45:20.473173 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:45:20.473344 kubelet[2803]: E0114 01:45:20.473290 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pglf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recursi
veReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-l58pb_calico-system(79093d5d-07cf-4a25-a816-7eeb844e241f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:45:20.474493 kubelet[2803]: E0114 01:45:20.474459 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 
01:45:21.097097 systemd[1]: Started sshd@8-172.239.193.229:22-20.161.92.111:42856.service - OpenSSH per-connection server daemon (20.161.92.111:42856). Jan 14 01:45:21.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.239.193.229:22-20.161.92.111:42856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:21.099371 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:45:21.099429 kernel: audit: type=1130 audit(1768355121.096:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.239.193.229:22-20.161.92.111:42856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:21.276000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:21.279397 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:45:21.283256 sshd[5247]: Accepted publickey for core from 20.161.92.111 port 42856 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:45:21.284583 kernel: audit: type=1101 audit(1768355121.276:767): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:21.276000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 
terminal=ssh res=success' Jan 14 01:45:21.293973 systemd-logind[1577]: New session 10 of user core. Jan 14 01:45:21.299822 kernel: audit: type=1103 audit(1768355121.276:768): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:21.299914 kernel: audit: type=1006 audit(1768355121.276:769): pid=5247 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 01:45:21.276000 audit[5247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff939bc2a0 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:21.276000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:21.329253 kubelet[2803]: E0114 01:45:21.328234 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:45:21.331342 kernel: audit: type=1300 audit(1768355121.276:769): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff939bc2a0 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:21.334681 kernel: audit: type=1327 audit(1768355121.276:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:21.332184 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 01:45:21.339000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:21.348451 kernel: audit: type=1105 audit(1768355121.339:770): pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:21.346000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:21.356037 kernel: audit: type=1103 audit(1768355121.346:771): pid=5251 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:21.474118 sshd[5251]: Connection closed by 20.161.92.111 port 42856 Jan 14 01:45:21.474840 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Jan 14 01:45:21.476000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:21.481736 systemd[1]: sshd@8-172.239.193.229:22-20.161.92.111:42856.service: Deactivated successfully. Jan 14 01:45:21.484125 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:45:21.486559 kernel: audit: type=1106 audit(1768355121.476:772): pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:21.487505 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Jan 14 01:45:21.477000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:21.491195 systemd-logind[1577]: Removed session 10. Jan 14 01:45:21.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.239.193.229:22-20.161.92.111:42856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:45:21.494456 kernel: audit: type=1104 audit(1768355121.477:773): pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:22.323453 kubelet[2803]: E0114 01:45:22.322860 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:45:26.505810 systemd[1]: Started sshd@9-172.239.193.229:22-20.161.92.111:48612.service - OpenSSH per-connection server daemon (20.161.92.111:48612). Jan 14 01:45:26.510214 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:45:26.510286 kernel: audit: type=1130 audit(1768355126.506:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.239.193.229:22-20.161.92.111:48612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:45:26.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.239.193.229:22-20.161.92.111:48612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:26.657000 audit[5270]: USER_ACCT pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.660395 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:45:26.666287 sshd[5270]: Accepted publickey for core from 20.161.92.111 port 48612 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:45:26.666597 kernel: audit: type=1101 audit(1768355126.657:776): pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.680089 kernel: audit: type=1103 audit(1768355126.658:777): pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.658000 audit[5270]: CRED_ACQ pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.687780 systemd-logind[1577]: New session 11 of user core. 
Jan 14 01:45:26.688474 kernel: audit: type=1006 audit(1768355126.658:778): pid=5270 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:45:26.658000 audit[5270]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4bdbe760 a2=3 a3=0 items=0 ppid=1 pid=5270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:26.700503 kernel: audit: type=1300 audit(1768355126.658:778): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4bdbe760 a2=3 a3=0 items=0 ppid=1 pid=5270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:26.700563 kernel: audit: type=1327 audit(1768355126.658:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:26.658000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:26.701792 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 01:45:26.707000 audit[5270]: USER_START pid=5270 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.717489 kernel: audit: type=1105 audit(1768355126.707:779): pid=5270 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.719000 audit[5274]: CRED_ACQ pid=5274 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.727447 kernel: audit: type=1103 audit(1768355126.719:780): pid=5274 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.836450 sshd[5274]: Connection closed by 20.161.92.111 port 48612 Jan 14 01:45:26.837142 sshd-session[5270]: pam_unix(sshd:session): session closed for user core Jan 14 01:45:26.838000 audit[5270]: USER_END pid=5270 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.848440 kernel: audit: type=1106 
audit(1768355126.838:781): pid=5270 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.838000 audit[5270]: CRED_DISP pid=5270 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.850232 systemd[1]: sshd@9-172.239.193.229:22-20.161.92.111:48612.service: Deactivated successfully. Jan 14 01:45:26.850844 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:45:26.856768 kernel: audit: type=1104 audit(1768355126.838:782): pid=5270 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:26.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.239.193.229:22-20.161.92.111:48612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:26.855114 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:45:26.875168 systemd[1]: Started sshd@10-172.239.193.229:22-20.161.92.111:48620.service - OpenSSH per-connection server daemon (20.161.92.111:48620). Jan 14 01:45:26.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.239.193.229:22-20.161.92.111:48620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:45:26.875992 systemd-logind[1577]: Removed session 11. Jan 14 01:45:27.027000 audit[5287]: USER_ACCT pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.029663 sshd[5287]: Accepted publickey for core from 20.161.92.111 port 48620 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:45:27.029000 audit[5287]: CRED_ACQ pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.029000 audit[5287]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2b4abea0 a2=3 a3=0 items=0 ppid=1 pid=5287 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:27.029000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:27.032748 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:45:27.041497 systemd-logind[1577]: New session 12 of user core. Jan 14 01:45:27.047767 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 01:45:27.053000 audit[5287]: USER_START pid=5287 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.055000 audit[5291]: CRED_ACQ pid=5291 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.210615 sshd[5291]: Connection closed by 20.161.92.111 port 48620 Jan 14 01:45:27.210948 sshd-session[5287]: pam_unix(sshd:session): session closed for user core Jan 14 01:45:27.213000 audit[5287]: USER_END pid=5287 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.213000 audit[5287]: CRED_DISP pid=5287 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.216765 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:45:27.219730 systemd[1]: sshd@10-172.239.193.229:22-20.161.92.111:48620.service: Deactivated successfully. Jan 14 01:45:27.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.239.193.229:22-20.161.92.111:48620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:45:27.224274 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:45:27.228109 systemd-logind[1577]: Removed session 12. Jan 14 01:45:27.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.239.193.229:22-20.161.92.111:48626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:27.248655 systemd[1]: Started sshd@11-172.239.193.229:22-20.161.92.111:48626.service - OpenSSH per-connection server daemon (20.161.92.111:48626). Jan 14 01:45:27.324407 kubelet[2803]: E0114 01:45:27.323637 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:45:27.414000 audit[5304]: USER_ACCT pid=5304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.415662 sshd[5304]: Accepted publickey for core from 20.161.92.111 port 48626 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:45:27.416000 audit[5304]: CRED_ACQ pid=5304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.416000 audit[5304]: SYSCALL arch=c000003e 
syscall=1 success=yes exit=3 a0=8 a1=7ffe655d4900 a2=3 a3=0 items=0 ppid=1 pid=5304 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:27.416000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:27.418137 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:45:27.425218 systemd-logind[1577]: New session 13 of user core. Jan 14 01:45:27.431568 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 01:45:27.435000 audit[5304]: USER_START pid=5304 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.439000 audit[5309]: CRED_ACQ pid=5309 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.572752 sshd[5309]: Connection closed by 20.161.92.111 port 48626 Jan 14 01:45:27.574284 sshd-session[5304]: pam_unix(sshd:session): session closed for user core Jan 14 01:45:27.576000 audit[5304]: USER_END pid=5304 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.576000 audit[5304]: CRED_DISP pid=5304 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:27.580337 systemd[1]: sshd@11-172.239.193.229:22-20.161.92.111:48626.service: Deactivated successfully. Jan 14 01:45:27.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.239.193.229:22-20.161.92.111:48626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:27.585799 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:45:27.587638 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Jan 14 01:45:27.590375 systemd-logind[1577]: Removed session 13. Jan 14 01:45:28.326493 kubelet[2803]: E0114 01:45:28.326358 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:45:31.325399 kubelet[2803]: E0114 01:45:31.324838 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:45:32.322224 
kubelet[2803]: E0114 01:45:32.322181 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:45:32.608965 systemd[1]: Started sshd@12-172.239.193.229:22-20.161.92.111:59436.service - OpenSSH per-connection server daemon (20.161.92.111:59436). Jan 14 01:45:32.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.239.193.229:22-20.161.92.111:59436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:32.612592 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 01:45:32.612707 kernel: audit: type=1130 audit(1768355132.607:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.239.193.229:22-20.161.92.111:59436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:45:32.798000 audit[5321]: USER_ACCT pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:32.800687 sshd[5321]: Accepted publickey for core from 20.161.92.111 port 59436 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:45:32.801000 audit[5321]: CRED_ACQ pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:32.810670 kernel: audit: type=1101 audit(1768355132.798:803): pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:32.811385 kernel: audit: type=1103 audit(1768355132.801:804): pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:32.810859 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:45:32.818118 kernel: audit: type=1006 audit(1768355132.801:805): pid=5321 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 01:45:32.801000 audit[5321]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa3aa3700 a2=3 a3=0 items=0 ppid=1 pid=5321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:32.828840 systemd-logind[1577]: New session 14 of user core. Jan 14 01:45:32.835509 kernel: audit: type=1300 audit(1768355132.801:805): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa3aa3700 a2=3 a3=0 items=0 ppid=1 pid=5321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:32.801000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:32.839999 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 01:45:32.840700 kernel: audit: type=1327 audit(1768355132.801:805): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:32.844000 audit[5321]: USER_START pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:32.860903 kernel: audit: type=1105 audit(1768355132.844:806): pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:32.860000 audit[5325]: CRED_ACQ pid=5325 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:32.871438 kernel: audit: type=1103 audit(1768355132.860:807): 
pid=5325 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:32.988974 sshd[5325]: Connection closed by 20.161.92.111 port 59436 Jan 14 01:45:32.989963 sshd-session[5321]: pam_unix(sshd:session): session closed for user core Jan 14 01:45:32.991000 audit[5321]: USER_END pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:32.997373 systemd[1]: sshd@12-172.239.193.229:22-20.161.92.111:59436.service: Deactivated successfully. Jan 14 01:45:33.001919 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 01:45:33.003512 kernel: audit: type=1106 audit(1768355132.991:808): pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:33.004705 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:45:32.991000 audit[5321]: CRED_DISP pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:33.009964 systemd-logind[1577]: Removed session 14. 
Jan 14 01:45:33.015634 kernel: audit: type=1104 audit(1768355132.991:809): pid=5321 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:32.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.239.193.229:22-20.161.92.111:59436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:33.030758 systemd[1]: Started sshd@13-172.239.193.229:22-20.161.92.111:59446.service - OpenSSH per-connection server daemon (20.161.92.111:59446).
Jan 14 01:45:33.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.239.193.229:22-20.161.92.111:59446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:33.213867 sshd[5337]: Accepted publickey for core from 20.161.92.111 port 59446 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U
Jan 14 01:45:33.212000 audit[5337]: USER_ACCT pid=5337 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:33.215000 audit[5337]: CRED_ACQ pid=5337 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:33.215000 audit[5337]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb6a47bd0 a2=3 a3=0 items=0 ppid=1 pid=5337 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:33.215000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:45:33.219032 sshd-session[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:45:33.229446 systemd-logind[1577]: New session 15 of user core.
Jan 14 01:45:33.236531 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 14 01:45:33.242000 audit[5337]: USER_START pid=5337 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:33.245000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:33.322473 kubelet[2803]: E0114 01:45:33.322046 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d"
Jan 14 01:45:33.561068 sshd[5341]: Connection closed by 20.161.92.111 port 59446
Jan 14 01:45:33.562680 sshd-session[5337]: pam_unix(sshd:session): session closed for user core
Jan 14 01:45:33.564000 audit[5337]: USER_END pid=5337 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:33.566000 audit[5337]: CRED_DISP pid=5337 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:33.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.239.193.229:22-20.161.92.111:59446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:33.570720 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit.
Jan 14 01:45:33.572938 systemd[1]: sshd@13-172.239.193.229:22-20.161.92.111:59446.service: Deactivated successfully.
Jan 14 01:45:33.576930 systemd[1]: session-15.scope: Deactivated successfully.
Jan 14 01:45:33.593482 systemd-logind[1577]: Removed session 15.
Jan 14 01:45:33.594536 systemd[1]: Started sshd@14-172.239.193.229:22-20.161.92.111:59450.service - OpenSSH per-connection server daemon (20.161.92.111:59450).
Jan 14 01:45:33.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.239.193.229:22-20.161.92.111:59450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:33.767000 audit[5351]: USER_ACCT pid=5351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:33.770059 sshd[5351]: Accepted publickey for core from 20.161.92.111 port 59450 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U
Jan 14 01:45:33.770000 audit[5351]: CRED_ACQ pid=5351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:33.770000 audit[5351]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7fd49bb0 a2=3 a3=0 items=0 ppid=1 pid=5351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:33.770000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:45:33.773279 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:45:33.792614 systemd-logind[1577]: New session 16 of user core.
Jan 14 01:45:33.799586 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 14 01:45:33.803000 audit[5351]: USER_START pid=5351 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:33.806000 audit[5373]: CRED_ACQ pid=5373 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:34.321940 kubelet[2803]: E0114 01:45:34.321836 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42"
Jan 14 01:45:34.590000 audit[5389]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=5389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:45:34.590000 audit[5389]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd69196470 a2=0 a3=7ffd6919645c items=0 ppid=2916 pid=5389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:34.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:45:34.595485 sshd[5373]: Connection closed by 20.161.92.111 port 59450
Jan 14 01:45:34.596922 sshd-session[5351]: pam_unix(sshd:session): session closed for user core
Jan 14 01:45:34.598000 audit[5351]: USER_END pid=5351 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:34.598000 audit[5351]: CRED_DISP pid=5351 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:34.605006 systemd[1]: sshd@14-172.239.193.229:22-20.161.92.111:59450.service: Deactivated successfully.
Jan 14 01:45:34.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.239.193.229:22-20.161.92.111:59450 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:34.609107 systemd[1]: session-16.scope: Deactivated successfully.
Jan 14 01:45:34.614606 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit.
Jan 14 01:45:34.617562 systemd-logind[1577]: Removed session 16.
Jan 14 01:45:34.616000 audit[5389]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:45:34.616000 audit[5389]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd69196470 a2=0 a3=0 items=0 ppid=2916 pid=5389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:34.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:45:34.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.239.193.229:22-20.161.92.111:59454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:34.629811 systemd[1]: Started sshd@15-172.239.193.229:22-20.161.92.111:59454.service - OpenSSH per-connection server daemon (20.161.92.111:59454).
Jan 14 01:45:34.667000 audit[5398]: NETFILTER_CFG table=filter:137 family=2 entries=38 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:45:34.667000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffbc661100 a2=0 a3=7fffbc6610ec items=0 ppid=2916 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:34.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:45:34.711000 audit[5398]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:45:34.711000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffbc661100 a2=0 a3=0 items=0 ppid=2916 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:34.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:45:34.800000 audit[5394]: USER_ACCT pid=5394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:34.802276 sshd[5394]: Accepted publickey for core from 20.161.92.111 port 59454 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U
Jan 14 01:45:34.802000 audit[5394]: CRED_ACQ pid=5394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:34.802000 audit[5394]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd794fb600 a2=3 a3=0 items=0 ppid=1 pid=5394 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:34.802000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:45:34.806032 sshd-session[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:45:34.814513 systemd-logind[1577]: New session 17 of user core.
Jan 14 01:45:34.821634 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 14 01:45:34.827000 audit[5394]: USER_START pid=5394 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:34.830000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:35.072906 sshd[5400]: Connection closed by 20.161.92.111 port 59454
Jan 14 01:45:35.073623 sshd-session[5394]: pam_unix(sshd:session): session closed for user core
Jan 14 01:45:35.073000 audit[5394]: USER_END pid=5394 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:35.075000 audit[5394]: CRED_DISP pid=5394 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:35.079928 systemd[1]: sshd@15-172.239.193.229:22-20.161.92.111:59454.service: Deactivated successfully.
Jan 14 01:45:35.080191 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit.
Jan 14 01:45:35.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.239.193.229:22-20.161.92.111:59454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:35.083060 systemd[1]: session-17.scope: Deactivated successfully.
Jan 14 01:45:35.087469 systemd-logind[1577]: Removed session 17.
Jan 14 01:45:35.109265 systemd[1]: Started sshd@16-172.239.193.229:22-20.161.92.111:59458.service - OpenSSH per-connection server daemon (20.161.92.111:59458).
Jan 14 01:45:35.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.239.193.229:22-20.161.92.111:59458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:35.292000 audit[5410]: USER_ACCT pid=5410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:35.293682 sshd[5410]: Accepted publickey for core from 20.161.92.111 port 59458 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U
Jan 14 01:45:35.294000 audit[5410]: CRED_ACQ pid=5410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:35.295000 audit[5410]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebbf5e5b0 a2=3 a3=0 items=0 ppid=1 pid=5410 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:35.295000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:45:35.297653 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:45:35.310612 systemd-logind[1577]: New session 18 of user core.
Jan 14 01:45:35.313984 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 14 01:45:35.318000 audit[5410]: USER_START pid=5410 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:35.326485 kubelet[2803]: E0114 01:45:35.326320 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588"
Jan 14 01:45:35.324000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:35.473184 sshd[5416]: Connection closed by 20.161.92.111 port 59458
Jan 14 01:45:35.475607 sshd-session[5410]: pam_unix(sshd:session): session closed for user core
Jan 14 01:45:35.475000 audit[5410]: USER_END pid=5410 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:35.475000 audit[5410]: CRED_DISP pid=5410 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:35.480476 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit.
Jan 14 01:45:35.482823 systemd[1]: sshd@16-172.239.193.229:22-20.161.92.111:59458.service: Deactivated successfully.
Jan 14 01:45:35.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.239.193.229:22-20.161.92.111:59458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:35.486026 systemd[1]: session-18.scope: Deactivated successfully.
Jan 14 01:45:35.491200 systemd-logind[1577]: Removed session 18.
Jan 14 01:45:40.303979 kernel: kauditd_printk_skb: 57 callbacks suppressed
Jan 14 01:45:40.304156 kernel: audit: type=1325 audit(1768355140.296:851): table=filter:139 family=2 entries=26 op=nft_register_rule pid=5428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:45:40.296000 audit[5428]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:45:40.296000 audit[5428]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc63f0ffb0 a2=0 a3=7ffc63f0ff9c items=0 ppid=2916 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:40.296000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:45:40.314063 kernel: audit: type=1300 audit(1768355140.296:851): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc63f0ffb0 a2=0 a3=7ffc63f0ff9c items=0 ppid=2916 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:40.314123 kernel: audit: type=1327 audit(1768355140.296:851): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:45:40.317000 audit[5428]: NETFILTER_CFG table=nat:140 family=2 entries=104 op=nft_register_chain pid=5428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:45:40.324704 kubelet[2803]: E0114 01:45:40.324669 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537"
Jan 14 01:45:40.317000 audit[5428]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc63f0ffb0 a2=0 a3=7ffc63f0ff9c items=0 ppid=2916 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:40.326514 kernel: audit: type=1325 audit(1768355140.317:852): table=nat:140 family=2 entries=104 op=nft_register_chain pid=5428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:45:40.326578 kernel: audit: type=1300 audit(1768355140.317:852): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc63f0ffb0 a2=0 a3=7ffc63f0ff9c items=0 ppid=2916 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:40.317000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:45:40.346442 kernel: audit: type=1327 audit(1768355140.317:852): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:45:40.513331 systemd[1]: Started sshd@17-172.239.193.229:22-20.161.92.111:59474.service - OpenSSH per-connection server daemon (20.161.92.111:59474).
Jan 14 01:45:40.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.239.193.229:22-20.161.92.111:59474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:40.521480 kernel: audit: type=1130 audit(1768355140.512:853): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.239.193.229:22-20.161.92.111:59474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:40.689000 audit[5430]: USER_ACCT pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:40.691463 sshd[5430]: Accepted publickey for core from 20.161.92.111 port 59474 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U
Jan 14 01:45:40.696239 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:45:40.702522 kernel: audit: type=1101 audit(1768355140.689:854): pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:40.692000 audit[5430]: CRED_ACQ pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:40.708139 systemd-logind[1577]: New session 19 of user core.
Jan 14 01:45:40.709799 kernel: audit: type=1103 audit(1768355140.692:855): pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:40.709873 kernel: audit: type=1006 audit(1768355140.692:856): pid=5430 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Jan 14 01:45:40.692000 audit[5430]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda09f4e70 a2=3 a3=0 items=0 ppid=1 pid=5430 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:40.692000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:45:40.717575 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 14 01:45:40.723000 audit[5430]: USER_START pid=5430 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:40.725000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:40.845778 sshd[5434]: Connection closed by 20.161.92.111 port 59474
Jan 14 01:45:40.846574 sshd-session[5430]: pam_unix(sshd:session): session closed for user core
Jan 14 01:45:40.849000 audit[5430]: USER_END pid=5430 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:40.849000 audit[5430]: CRED_DISP pid=5430 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:40.858361 systemd[1]: sshd@17-172.239.193.229:22-20.161.92.111:59474.service: Deactivated successfully.
Jan 14 01:45:40.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.239.193.229:22-20.161.92.111:59474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:40.864067 systemd[1]: session-19.scope: Deactivated successfully.
Jan 14 01:45:40.865278 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit.
Jan 14 01:45:40.868057 systemd-logind[1577]: Removed session 19.
Jan 14 01:45:41.326056 kubelet[2803]: E0114 01:45:41.325997 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jan 14 01:45:42.320849 kubelet[2803]: E0114 01:45:42.320802 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Jan 14 01:45:43.326406 kubelet[2803]: E0114 01:45:43.326336 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a"
Jan 14 01:45:45.324444 kubelet[2803]: E0114 01:45:45.324317 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f"
Jan 14 01:45:45.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.239.193.229:22-20.161.92.111:35652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:45.880769 systemd[1]: Started sshd@18-172.239.193.229:22-20.161.92.111:35652.service - OpenSSH per-connection server daemon (20.161.92.111:35652).
Jan 14 01:45:45.884373 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 14 01:45:45.884441 kernel: audit: type=1130 audit(1768355145.881:862): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.239.193.229:22-20.161.92.111:35652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:45:46.046000 audit[5446]: USER_ACCT pid=5446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:46.055015 sshd[5446]: Accepted publickey for core from 20.161.92.111 port 35652 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U
Jan 14 01:45:46.055435 kernel: audit: type=1101 audit(1768355146.046:863): pid=5446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:46.057319 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:45:46.054000 audit[5446]: CRED_ACQ pid=5446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:46.071500 kernel: audit: type=1103 audit(1768355146.054:864): pid=5446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:46.074765 systemd-logind[1577]: New session 20 of user core.
Jan 14 01:45:46.082114 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 14 01:45:46.082467 kernel: audit: type=1006 audit(1768355146.054:865): pid=5446 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Jan 14 01:45:46.054000 audit[5446]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb025c0e0 a2=3 a3=0 items=0 ppid=1 pid=5446 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:46.097464 kernel: audit: type=1300 audit(1768355146.054:865): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb025c0e0 a2=3 a3=0 items=0 ppid=1 pid=5446 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:45:46.054000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:45:46.103434 kernel: audit: type=1327 audit(1768355146.054:865): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:45:46.086000 audit[5446]: USER_START pid=5446 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:46.113436 kernel: audit: type=1105 audit(1768355146.086:866): pid=5446 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:46.094000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:46.122712 kernel: audit: type=1103 audit(1768355146.094:867): pid=5450 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:46.221426 sshd[5450]: Connection closed by 20.161.92.111 port 35652
Jan 14 01:45:46.222459 sshd-session[5446]: pam_unix(sshd:session): session closed for user core
Jan 14 01:45:46.224000 audit[5446]: USER_END pid=5446 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success'
Jan 14 01:45:46.229587 systemd[1]: sshd@18-172.239.193.229:22-20.161.92.111:35652.service: Deactivated successfully.
Jan 14 01:45:46.229617 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit.
Jan 14 01:45:46.233949 systemd[1]: session-20.scope: Deactivated successfully.
Jan 14 01:45:46.239250 systemd-logind[1577]: Removed session 20.
Jan 14 01:45:46.260487 kernel: audit: type=1106 audit(1768355146.224:868): pid=5446 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:46.224000 audit[5446]: CRED_DISP pid=5446 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:46.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.239.193.229:22-20.161.92.111:35652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:46.269204 kernel: audit: type=1104 audit(1768355146.224:869): pid=5446 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:48.320498 kubelet[2803]: E0114 01:45:48.320380 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:45:48.322236 kubelet[2803]: E0114 01:45:48.321294 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:45:48.322236 kubelet[2803]: E0114 01:45:48.321374 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:45:49.324188 kubelet[2803]: E0114 01:45:49.323830 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:45:49.325302 kubelet[2803]: E0114 01:45:49.325221 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588" Jan 14 01:45:50.320844 kubelet[2803]: E0114 01:45:50.320803 2803 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Jan 14 01:45:51.272321 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:45:51.272445 kernel: audit: type=1130 audit(1768355151.262:871): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.239.193.229:22-20.161.92.111:35668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:51.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.239.193.229:22-20.161.92.111:35668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:51.263471 systemd[1]: Started sshd@19-172.239.193.229:22-20.161.92.111:35668.service - OpenSSH per-connection server daemon (20.161.92.111:35668). Jan 14 01:45:51.451000 audit[5461]: USER_ACCT pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.455136 sshd-session[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:45:51.455874 sshd[5461]: Accepted publickey for core from 20.161.92.111 port 35668 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:45:51.452000 audit[5461]: CRED_ACQ pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.462276 kernel: audit: type=1101 audit(1768355151.451:872): pid=5461 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.462327 kernel: audit: type=1103 audit(1768355151.452:873): pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.462940 systemd-logind[1577]: New session 21 of user core. Jan 14 01:45:51.468951 kernel: audit: type=1006 audit(1768355151.452:874): pid=5461 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 01:45:51.452000 audit[5461]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7cc5f8f0 a2=3 a3=0 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:51.474154 kernel: audit: type=1300 audit(1768355151.452:874): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7cc5f8f0 a2=3 a3=0 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:51.474628 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 01:45:51.482435 kernel: audit: type=1327 audit(1768355151.452:874): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:51.452000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:51.482000 audit[5461]: USER_START pid=5461 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.487000 audit[5465]: CRED_ACQ pid=5465 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.496226 kernel: audit: type=1105 audit(1768355151.482:875): pid=5461 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.496292 kernel: audit: type=1103 audit(1768355151.487:876): pid=5465 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.606978 sshd[5465]: Connection closed by 20.161.92.111 port 35668 Jan 14 01:45:51.607655 sshd-session[5461]: pam_unix(sshd:session): session closed for user core Jan 14 01:45:51.622455 kernel: audit: type=1106 audit(1768355151.610:877): pid=5461 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.610000 audit[5461]: USER_END pid=5461 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.623253 systemd[1]: sshd@19-172.239.193.229:22-20.161.92.111:35668.service: Deactivated successfully. Jan 14 01:45:51.625888 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 01:45:51.628675 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:45:51.611000 audit[5461]: CRED_DISP pid=5461 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.633200 systemd-logind[1577]: Removed session 21. Jan 14 01:45:51.637451 kernel: audit: type=1104 audit(1768355151.611:878): pid=5461 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:51.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.239.193.229:22-20.161.92.111:35668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:45:54.321851 kubelet[2803]: E0114 01:45:54.321667 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8b466d74c-vftwx" podUID="5131dab4-8de3-41fd-aa18-51b8b1928537" Jan 14 01:45:56.655891 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:45:56.656001 kernel: audit: type=1130 audit(1768355156.648:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.239.193.229:22-20.161.92.111:59576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:56.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.239.193.229:22-20.161.92.111:59576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:56.649941 systemd[1]: Started sshd@20-172.239.193.229:22-20.161.92.111:59576.service - OpenSSH per-connection server daemon (20.161.92.111:59576). 
Jan 14 01:45:56.876000 audit[5477]: USER_ACCT pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:56.880974 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:45:56.884078 sshd[5477]: Accepted publickey for core from 20.161.92.111 port 59576 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:45:56.877000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:56.889954 kernel: audit: type=1101 audit(1768355156.876:881): pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:56.890028 kernel: audit: type=1103 audit(1768355156.877:882): pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:56.893444 systemd-logind[1577]: New session 22 of user core. 
Jan 14 01:45:56.878000 audit[5477]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff38bfbd90 a2=3 a3=0 items=0 ppid=1 pid=5477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:56.906651 kernel: audit: type=1006 audit(1768355156.878:883): pid=5477 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:45:56.906775 kernel: audit: type=1300 audit(1768355156.878:883): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff38bfbd90 a2=3 a3=0 items=0 ppid=1 pid=5477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:45:56.878000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:56.913917 kernel: audit: type=1327 audit(1768355156.878:883): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:45:56.914092 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 01:45:56.919000 audit[5477]: USER_START pid=5477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:56.922000 audit[5481]: CRED_ACQ pid=5481 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:56.931830 kernel: audit: type=1105 audit(1768355156.919:884): pid=5477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:56.931931 kernel: audit: type=1103 audit(1768355156.922:885): pid=5481 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:57.069447 sshd[5481]: Connection closed by 20.161.92.111 port 59576 Jan 14 01:45:57.070685 sshd-session[5477]: pam_unix(sshd:session): session closed for user core Jan 14 01:45:57.084810 kernel: audit: type=1106 audit(1768355157.071:886): pid=5477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:57.071000 audit[5477]: 
USER_END pid=5477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:57.072000 audit[5477]: CRED_DISP pid=5477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:57.094889 kernel: audit: type=1104 audit(1768355157.072:887): pid=5477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:45:57.088326 systemd[1]: sshd@20-172.239.193.229:22-20.161.92.111:59576.service: Deactivated successfully. Jan 14 01:45:57.094811 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:45:57.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.239.193.229:22-20.161.92.111:59576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:45:57.099938 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:45:57.102236 systemd-logind[1577]: Removed session 22. 
Jan 14 01:45:57.324121 kubelet[2803]: E0114 01:45:57.324067 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79c4f8b6b9-9knmv" podUID="587711a7-ed5a-468c-b6b8-7056f146431a" Jan 14 01:45:59.321673 kubelet[2803]: E0114 01:45:59.321595 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-l58pb" podUID="79093d5d-07cf-4a25-a816-7eeb844e241f" Jan 14 01:46:01.322220 kubelet[2803]: E0114 01:46:01.321800 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-8b466d74c-r9454" podUID="467c90a2-bf12-4a6d-a6a3-0bb4155d4e42" Jan 14 01:46:01.323350 kubelet[2803]: E0114 01:46:01.323315 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8597978bc7-qzzjk" podUID="10b6b02c-a804-4455-980f-c8e7b004f89d" Jan 14 01:46:02.112252 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:46:02.112388 kernel: audit: type=1130 audit(1768355162.102:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.239.193.229:22-20.161.92.111:59578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:46:02.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.239.193.229:22-20.161.92.111:59578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:46:02.103669 systemd[1]: Started sshd@21-172.239.193.229:22-20.161.92.111:59578.service - OpenSSH per-connection server daemon (20.161.92.111:59578). 
Jan 14 01:46:02.270000 audit[5494]: USER_ACCT pid=5494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.274605 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:46:02.278051 sshd[5494]: Accepted publickey for core from 20.161.92.111 port 59578 ssh2: RSA SHA256:GEKeq0ZQ+ZKIUvRmHKqM/0GWiNRvezZxn9Dli11ow8U Jan 14 01:46:02.270000 audit[5494]: CRED_ACQ pid=5494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.281864 kernel: audit: type=1101 audit(1768355162.270:890): pid=5494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.281927 kernel: audit: type=1103 audit(1768355162.270:891): pid=5494 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.285267 systemd-logind[1577]: New session 23 of user core. 
Jan 14 01:46:02.288977 kernel: audit: type=1006 audit(1768355162.270:892): pid=5494 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 01:46:02.270000 audit[5494]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9c7e79d0 a2=3 a3=0 items=0 ppid=1 pid=5494 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:46:02.292873 kernel: audit: type=1300 audit(1768355162.270:892): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9c7e79d0 a2=3 a3=0 items=0 ppid=1 pid=5494 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:46:02.293704 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 01:46:02.270000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:46:02.301459 kernel: audit: type=1327 audit(1768355162.270:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:46:02.301000 audit[5494]: USER_START pid=5494 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.318673 kernel: audit: type=1105 audit(1768355162.301:893): pid=5494 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 
01:46:02.318729 kernel: audit: type=1103 audit(1768355162.310:894): pid=5498 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.310000 audit[5498]: CRED_ACQ pid=5498 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.420439 sshd[5498]: Connection closed by 20.161.92.111 port 59578 Jan 14 01:46:02.420600 sshd-session[5494]: pam_unix(sshd:session): session closed for user core Jan 14 01:46:02.433448 kernel: audit: type=1106 audit(1768355162.421:895): pid=5494 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.421000 audit[5494]: USER_END pid=5494 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.421000 audit[5494]: CRED_DISP pid=5494 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.435187 systemd[1]: sshd@21-172.239.193.229:22-20.161.92.111:59578.service: Deactivated successfully. 
Jan 14 01:46:02.441668 kernel: audit: type=1104 audit(1768355162.421:896): pid=5494 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:46:02.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.239.193.229:22-20.161.92.111:59578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:46:02.441862 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 01:46:02.446300 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit. Jan 14 01:46:02.448348 systemd-logind[1577]: Removed session 23. Jan 14 01:46:04.321899 kubelet[2803]: E0114 01:46:04.321740 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gg5g8" podUID="27494ae0-0ad7-4d62-b447-69c7f55fa588"