Dec 12 18:12:48.205313 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025
Dec 12 18:12:48.205337 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 12 18:12:48.205347 kernel: BIOS-provided physical RAM map:
Dec 12 18:12:48.205354 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Dec 12 18:12:48.205361 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Dec 12 18:12:48.205367 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 12 18:12:48.205377 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 12 18:12:48.205385 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 12 18:12:48.205392 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 12 18:12:48.205399 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 12 18:12:48.205406 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:12:48.205413 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 12 18:12:48.205420 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Dec 12 18:12:48.205427 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:12:48.205438 kernel: NX (Execute Disable) protection: active
Dec 12 18:12:48.205446 kernel: APIC: Static calls initialized
Dec 12 18:12:48.205454 kernel: SMBIOS 2.8 present.
Dec 12 18:12:48.205462 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Dec 12 18:12:48.205469 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:12:48.205479 kernel: Hypervisor detected: KVM
Dec 12 18:12:48.205487 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:12:48.205495 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:12:48.207551 kernel: kvm-clock: using sched offset of 6164656930 cycles
Dec 12 18:12:48.207567 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:12:48.207576 kernel: tsc: Detected 2000.000 MHz processor
Dec 12 18:12:48.207585 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:12:48.207593 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:12:48.207604 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Dec 12 18:12:48.207612 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 12 18:12:48.207620 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:12:48.207628 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:12:48.207635 kernel: Using GB pages for direct mapping
Dec 12 18:12:48.207643 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:12:48.207651 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Dec 12 18:12:48.207659 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:12:48.207669 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:12:48.207677 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:12:48.207685 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 12 18:12:48.207692 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:12:48.207700 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:12:48.207711 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:12:48.207722 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:12:48.207730 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Dec 12 18:12:48.207738 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Dec 12 18:12:48.207746 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 12 18:12:48.207755 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Dec 12 18:12:48.207765 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Dec 12 18:12:48.207773 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Dec 12 18:12:48.207781 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Dec 12 18:12:48.207789 kernel: No NUMA configuration found
Dec 12 18:12:48.207797 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Dec 12 18:12:48.207805 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Dec 12 18:12:48.207822 kernel: Zone ranges:
Dec 12 18:12:48.207832 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:12:48.207840 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 12 18:12:48.207848 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:12:48.207856 kernel: Device empty
Dec 12 18:12:48.207864 kernel: Movable zone start for each node
Dec 12 18:12:48.207872 kernel: Early memory node ranges
Dec 12 18:12:48.207880 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 12 18:12:48.207888 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 12 18:12:48.207898 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:12:48.207906 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Dec 12 18:12:48.207914 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:12:48.207922 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 18:12:48.207930 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Dec 12 18:12:48.207938 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:12:48.207947 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:12:48.207957 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:12:48.207965 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:12:48.207973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:12:48.207981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:12:48.207989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:12:48.207997 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:12:48.208005 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:12:48.208013 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:12:48.208024 kernel: TSC deadline timer available
Dec 12 18:12:48.208032 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:12:48.208040 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:12:48.208048 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:12:48.208056 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:12:48.208064 kernel: CPU topo: Num. cores per package: 2
Dec 12 18:12:48.208072 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:12:48.208081 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:12:48.208089 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:12:48.208097 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:12:48.208105 kernel: kvm-guest: setup PV sched yield
Dec 12 18:12:48.208113 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 12 18:12:48.208121 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:12:48.208130 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:12:48.208138 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:12:48.208148 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:12:48.208156 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:12:48.208164 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:12:48.208172 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:12:48.208180 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:12:48.208189 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 12 18:12:48.208200 kernel: random: crng init done
Dec 12 18:12:48.208208 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:12:48.208216 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:12:48.208224 kernel: Fallback order for Node 0: 0
Dec 12 18:12:48.208232 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Dec 12 18:12:48.208240 kernel: Policy zone: Normal
Dec 12 18:12:48.208248 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:12:48.208256 kernel: software IO TLB: area num 2.
Dec 12 18:12:48.208266 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:12:48.208274 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:12:48.208284 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:12:48.208297 kernel: Dynamic Preempt: voluntary
Dec 12 18:12:48.208309 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:12:48.208321 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:12:48.208332 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:12:48.208343 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:12:48.208351 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:12:48.208359 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:12:48.208367 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:12:48.208375 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:12:48.208385 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:12:48.208409 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:12:48.208417 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:12:48.208426 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 12 18:12:48.208434 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:12:48.208445 kernel: Console: colour VGA+ 80x25
Dec 12 18:12:48.208454 kernel: printk: legacy console [tty0] enabled
Dec 12 18:12:48.208462 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:12:48.208471 kernel: ACPI: Core revision 20240827
Dec 12 18:12:48.208482 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:12:48.208490 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:12:48.208499 kernel: x2apic enabled
Dec 12 18:12:48.208526 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:12:48.208535 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:12:48.208544 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:12:48.208552 kernel: kvm-guest: setup PV IPIs
Dec 12 18:12:48.208563 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:12:48.208572 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 12 18:12:48.208581 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Dec 12 18:12:48.208589 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:12:48.208598 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:12:48.208606 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:12:48.208615 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:12:48.208625 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:12:48.208634 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:12:48.208643 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 12 18:12:48.208651 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:12:48.208660 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:12:48.208668 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:12:48.208678 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:12:48.208688 kernel: active return thunk: srso_alias_return_thunk
Dec 12 18:12:48.208697 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:12:48.208705 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Dec 12 18:12:48.208714 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:12:48.208723 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:12:48.208731 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:12:48.208742 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:12:48.208750 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 12 18:12:48.208759 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:12:48.208767 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Dec 12 18:12:48.208776 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 12 18:12:48.208784 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:12:48.208793 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:12:48.208801 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:12:48.208812 kernel: landlock: Up and running.
Dec 12 18:12:48.208820 kernel: SELinux: Initializing.
Dec 12 18:12:48.208828 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:12:48.208837 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:12:48.208846 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 12 18:12:48.208872 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:12:48.208882 kernel: ... version: 0
Dec 12 18:12:48.208893 kernel: ... bit width: 48
Dec 12 18:12:48.208901 kernel: ... generic registers: 6
Dec 12 18:12:48.208909 kernel: ... value mask: 0000ffffffffffff
Dec 12 18:12:48.208918 kernel: ... max period: 00007fffffffffff
Dec 12 18:12:48.208926 kernel: ... fixed-purpose events: 0
Dec 12 18:12:48.208935 kernel: ... event mask: 000000000000003f
Dec 12 18:12:48.208943 kernel: signal: max sigframe size: 3376
Dec 12 18:12:48.208954 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:12:48.208963 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:12:48.208971 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:12:48.208980 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:12:48.208988 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:12:48.208997 kernel: .... node #0, CPUs: #1
Dec 12 18:12:48.209005 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:12:48.209016 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Dec 12 18:12:48.209024 kernel: Memory: 3979480K/4193772K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 208864K reserved, 0K cma-reserved)
Dec 12 18:12:48.209033 kernel: devtmpfs: initialized
Dec 12 18:12:48.209041 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:12:48.209050 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:12:48.209059 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:12:48.209067 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:12:48.209078 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:12:48.209086 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:12:48.209095 kernel: audit: type=2000 audit(1765563165.303:1): state=initialized audit_enabled=0 res=1
Dec 12 18:12:48.209103 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:12:48.209112 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:12:48.209120 kernel: cpuidle: using governor menu
Dec 12 18:12:48.209129 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:12:48.209139 kernel: dca service started, version 1.12.1
Dec 12 18:12:48.209148 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 12 18:12:48.209156 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 12 18:12:48.209165 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:12:48.209173 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:12:48.209182 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:12:48.209190 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:12:48.209200 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:12:48.209209 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:12:48.209217 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:12:48.209225 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:12:48.209233 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:12:48.209242 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:12:48.209250 kernel: ACPI: Interpreter enabled
Dec 12 18:12:48.209260 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:12:48.209268 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:12:48.209277 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:12:48.209285 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:12:48.209293 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:12:48.209301 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:12:48.209664 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:12:48.209983 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:12:48.210172 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:12:48.210183 kernel: PCI host bridge to bus 0000:00
Dec 12 18:12:48.210365 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:12:48.210566 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:12:48.210745 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:12:48.210909 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 12 18:12:48.211071 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 12 18:12:48.211232 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Dec 12 18:12:48.211394 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:12:48.211622 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:12:48.211822 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:12:48.212001 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 12 18:12:48.212175 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 12 18:12:48.212355 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 12 18:12:48.213152 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:12:48.213369 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:12:48.213602 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Dec 12 18:12:48.213785 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 12 18:12:48.213962 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 12 18:12:48.214147 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:12:48.214323 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 12 18:12:48.214523 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 12 18:12:48.214707 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 12 18:12:48.214884 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 12 18:12:48.215068 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:12:48.215243 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:12:48.215429 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:12:48.216221 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Dec 12 18:12:48.216418 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Dec 12 18:12:48.216630 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:12:48.216809 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 12 18:12:48.216821 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:12:48.216831 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:12:48.216843 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:12:48.216852 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:12:48.216861 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:12:48.216869 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:12:48.216878 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:12:48.216886 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:12:48.216895 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:12:48.216905 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:12:48.216914 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:12:48.216923 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:12:48.216931 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:12:48.216940 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:12:48.216948 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:12:48.216957 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:12:48.216967 kernel: iommu: Default domain type: Translated
Dec 12 18:12:48.216976 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:12:48.216984 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:12:48.216993 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:12:48.217001 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Dec 12 18:12:48.217010 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 12 18:12:48.217183 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:12:48.217389 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:12:48.218236 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:12:48.218254 kernel: vgaarb: loaded
Dec 12 18:12:48.218265 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:12:48.218275 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:12:48.218284 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:12:48.218293 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:12:48.218306 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:12:48.218315 kernel: pnp: PnP ACPI init
Dec 12 18:12:48.218536 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 12 18:12:48.218551 kernel: pnp: PnP ACPI: found 5 devices
Dec 12 18:12:48.218561 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:12:48.218570 kernel: NET: Registered PF_INET protocol family
Dec 12 18:12:48.218583 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:12:48.218592 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:12:48.218601 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:12:48.218610 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:12:48.218618 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:12:48.218627 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:12:48.218636 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:12:48.218648 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:12:48.218657 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:12:48.218666 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:12:48.218835 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:12:48.219001 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:12:48.219164 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:12:48.219326 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 12 18:12:48.219492 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 12 18:12:48.219752 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Dec 12 18:12:48.219766 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:12:48.219775 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 12 18:12:48.219784 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Dec 12 18:12:48.219793 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 12 18:12:48.219802 kernel: Initialise system trusted keyrings
Dec 12 18:12:48.219815 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:12:48.219824 kernel: Key type asymmetric registered
Dec 12 18:12:48.219919 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:12:48.219933 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:12:48.219943 kernel: io scheduler mq-deadline registered
Dec 12 18:12:48.219952 kernel: io scheduler kyber registered
Dec 12 18:12:48.219961 kernel: io scheduler bfq registered
Dec 12 18:12:48.219972 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:12:48.219982 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:12:48.219991 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:12:48.220000 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:12:48.220009 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:12:48.220018 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:12:48.220027 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:12:48.220038 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:12:48.220047 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:12:48.220240 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 12 18:12:48.220413 kernel: rtc_cmos 00:03: registered as rtc0
Dec 12 18:12:48.220618 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:12:46 UTC (1765563166)
Dec 12 18:12:48.220792 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 12 18:12:48.220807 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:12:48.220816 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:12:48.220825 kernel: Segment Routing with IPv6
Dec 12 18:12:48.220834 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:12:48.220843 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:12:48.220852 kernel: Key type dns_resolver registered
Dec 12 18:12:48.220861 kernel: IPI shorthand broadcast: enabled
Dec 12 18:12:48.220872 kernel: sched_clock: Marking stable (1806004380, 343488430)->(2247770780, -98277970)
Dec 12 18:12:48.220881 kernel: registered taskstats version 1
Dec 12 18:12:48.220890 kernel: Loading compiled-in X.509 certificates
Dec 12 18:12:48.220898 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4'
Dec 12 18:12:48.220907 kernel: Demotion targets for Node 0: null
Dec 12 18:12:48.220916 kernel: Key type .fscrypt registered
Dec 12 18:12:48.220925 kernel: Key type fscrypt-provisioning registered
Dec 12 18:12:48.220936 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:12:48.220945 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:12:48.220954 kernel: ima: No architecture policies found
Dec 12 18:12:48.220962 kernel: clk: Disabling unused clocks
Dec 12 18:12:48.220972 kernel: Freeing unused kernel image (initmem) memory: 15464K
Dec 12 18:12:48.220981 kernel: Write protecting the kernel read-only data: 45056k
Dec 12 18:12:48.220990 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Dec 12 18:12:48.221000 kernel: Run /init as init process
Dec 12 18:12:48.221009 kernel: with arguments:
Dec 12 18:12:48.221018 kernel: /init
Dec 12 18:12:48.221026 kernel: with environment:
Dec 12 18:12:48.221035 kernel: HOME=/
Dec 12 18:12:48.221061 kernel: TERM=linux
Dec 12 18:12:48.221072 kernel: SCSI subsystem initialized
Dec 12 18:12:48.221081 kernel: libata version 3.00 loaded.
Dec 12 18:12:48.221262 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 18:12:48.221274 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 18:12:48.221449 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 18:12:48.222221 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 18:12:48.222402 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 18:12:48.222660 kernel: scsi host0: ahci
Dec 12 18:12:48.222860 kernel: scsi host1: ahci
Dec 12 18:12:48.223049 kernel: scsi host2: ahci
Dec 12 18:12:48.223373 kernel: scsi host3: ahci
Dec 12 18:12:48.224021 kernel: scsi host4: ahci
Dec 12 18:12:48.224263 kernel: scsi host5: ahci
Dec 12 18:12:48.224282 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1
Dec 12 18:12:48.224292 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1
Dec 12 18:12:48.224302 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1
Dec 12 18:12:48.224311 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1
Dec 12 18:12:48.224323 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1
Dec 12 18:12:48.224332 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1
Dec 12 18:12:48.224344 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 12 18:12:48.224353 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 18:12:48.224362 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 18:12:48.224371 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 18:12:48.224380 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 18:12:48.224390 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 18:12:48.224629 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Dec 12 18:12:48.224871 kernel: scsi host6: Virtio SCSI HBA
Dec 12 18:12:48.225103 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 12 18:12:48.225317 kernel: sd 6:0:0:0: Power-on or device reset occurred
Dec 12 18:12:48.225573 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Dec 12 18:12:48.225826 kernel: sd 6:0:0:0: [sda] Write Protect is off
Dec 12 18:12:48.226033 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 12 18:12:48.226235 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 12 18:12:48.226247 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:12:48.226257 kernel: GPT:25804799 != 167739391
Dec 12 18:12:48.226267 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:12:48.226276 kernel: GPT:25804799 != 167739391
Dec 12 18:12:48.226285 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:12:48.226297 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:12:48.226490 kernel: sd 6:0:0:0: [sda] Attached SCSI disk
Dec 12 18:12:48.226502 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:12:48.226529 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:12:48.226538 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:12:48.226548 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 12 18:12:48.226558 kernel: raid6: avx2x4 gen() 29661 MB/s
Dec 12 18:12:48.226570 kernel: raid6: avx2x2 gen() 28977 MB/s
Dec 12 18:12:48.226580 kernel: raid6: avx2x1 gen() 17646 MB/s
Dec 12 18:12:48.226589 kernel: raid6: using algorithm avx2x4 gen() 29661 MB/s
Dec 12 18:12:48.226599 kernel: raid6: .... xor() 3295 MB/s, rmw enabled
Dec 12 18:12:48.226608 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:12:48.226619 kernel: xor: automatically using best checksumming function avx
Dec 12 18:12:48.226629 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:12:48.226639 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (167)
Dec 12 18:12:48.226648 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f
Dec 12 18:12:48.226657 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:12:48.226667 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 18:12:48.226676 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:12:48.226688 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:12:48.226697 kernel: loop: module loaded
Dec 12 18:12:48.226706 kernel: loop0: detected capacity change from 0 to 100136
Dec 12 18:12:48.226716 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:12:48.226727 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:12:48.226738 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:12:48.226751 systemd[1]: Detected virtualization kvm.
Dec 12 18:12:48.226761 systemd[1]: Detected architecture x86-64.
Dec 12 18:12:48.226770 systemd[1]: Running in initrd.
Dec 12 18:12:48.226779 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:12:48.226789 systemd[1]: Hostname set to .
Dec 12 18:12:48.226799 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:12:48.226811 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:12:48.226820 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:12:48.226830 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:12:48.226840 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:12:48.226851 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:12:48.226861 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:12:48.226873 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:12:48.226886 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:12:48.226895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:12:48.226905 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:12:48.226915 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:12:48.226925 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:12:48.226937 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:12:48.226946 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:12:48.226956 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:12:48.226966 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:12:48.226976 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:12:48.226985 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 12 18:12:48.226995 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:12:48.227007 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:12:48.227017 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:12:48.227027 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:12:48.227037 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:12:48.227046 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:12:48.227056 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:12:48.227066 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:12:48.227078 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:12:48.227088 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:12:48.227098 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:12:48.227108 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:12:48.227118 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:12:48.227128 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:12:48.227141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:12:48.227151 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:12:48.227161 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:12:48.227171 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:12:48.227183 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:12:48.227216 systemd-journald[304]: Collecting audit messages is enabled.
Dec 12 18:12:48.227238 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:12:48.227251 systemd-journald[304]: Journal started
Dec 12 18:12:48.227271 systemd-journald[304]: Runtime Journal (/run/log/journal/6760fd39c2d949e892b37d3efb84cd6f) is 8M, max 78.1M, 70.1M free.
Dec 12 18:12:48.230702 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:12:48.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.235090 kernel: audit: type=1130 audit(1765563168.232:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.245557 kernel: Bridge firewalling registered
Dec 12 18:12:48.245889 systemd-modules-load[305]: Inserted module 'br_netfilter'
Dec 12 18:12:48.246652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:12:48.338653 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:12:48.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.345012 systemd-tmpfiles[319]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:12:48.349213 kernel: audit: type=1130 audit(1765563168.339:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.348309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:12:48.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.351942 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:12:48.367291 kernel: audit: type=1130 audit(1765563168.350:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.367311 kernel: audit: type=1130 audit(1765563168.358:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.359335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:12:48.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.372634 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:12:48.378435 kernel: audit: type=1130 audit(1765563168.368:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.382627 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:12:48.386747 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:12:48.400763 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:12:48.414071 kernel: audit: type=1130 audit(1765563168.401:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.405030 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:12:48.416697 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:12:48.427346 kernel: audit: type=1130 audit(1765563168.416:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.427578 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:12:48.437015 kernel: audit: type=1130 audit(1765563168.428:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.437000 audit: BPF prog-id=6 op=LOAD
Dec 12 18:12:48.441642 kernel: audit: type=1334 audit(1765563168.437:10): prog-id=6 op=LOAD
Dec 12 18:12:48.441829 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:12:48.448747 dracut-cmdline[339]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 12 18:12:48.503571 systemd-resolved[350]: Positive Trust Anchors:
Dec 12 18:12:48.504751 systemd-resolved[350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:12:48.504759 systemd-resolved[350]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 12 18:12:48.504795 systemd-resolved[350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:12:48.539196 systemd-resolved[350]: Defaulting to hostname 'linux'.
Dec 12 18:12:48.541186 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:12:48.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.543161 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:12:48.562626 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:12:48.578527 kernel: iscsi: registered transport (tcp)
Dec 12 18:12:48.604298 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:12:48.604348 kernel: QLogic iSCSI HBA Driver
Dec 12 18:12:48.633338 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:12:48.683192 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:12:48.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.686495 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:12:48.757651 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:12:48.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.760819 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:12:48.764636 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:12:48.810444 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:12:48.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.812000 audit: BPF prog-id=7 op=LOAD
Dec 12 18:12:48.812000 audit: BPF prog-id=8 op=LOAD
Dec 12 18:12:48.814083 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:12:48.843627 systemd-udevd[586]: Using default interface naming scheme 'v257'.
Dec 12 18:12:48.862473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:12:48.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.867542 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:12:48.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.898025 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:12:48.901000 audit: BPF prog-id=9 op=LOAD
Dec 12 18:12:48.903720 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:12:48.906568 dracut-pre-trigger[653]: rd.md=0: removing MD RAID activation
Dec 12 18:12:48.943037 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:12:48.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.946911 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:12:48.964963 systemd-networkd[689]: lo: Link UP
Dec 12 18:12:48.964973 systemd-networkd[689]: lo: Gained carrier
Dec 12 18:12:48.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:48.967117 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:12:48.968214 systemd[1]: Reached target network.target - Network.
Dec 12 18:12:49.053412 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:12:49.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:49.059202 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:12:49.188525 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:12:49.189101 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 12 18:12:49.201474 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:12:49.344891 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 12 18:12:49.388491 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 12 18:12:49.400489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:12:49.405756 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:12:49.417364 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:12:49.421319 disk-uuid[759]: Primary Header is updated.
Dec 12 18:12:49.421319 disk-uuid[759]: Secondary Entries is updated.
Dec 12 18:12:49.421319 disk-uuid[759]: Secondary Header is updated.
Dec 12 18:12:49.435732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:12:49.435851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:12:49.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:49.439564 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:12:49.455366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:12:49.461187 systemd-networkd[689]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 18:12:49.461198 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:12:49.464838 systemd-networkd[689]: eth0: Link UP
Dec 12 18:12:49.465087 systemd-networkd[689]: eth0: Gained carrier
Dec 12 18:12:49.465097 systemd-networkd[689]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 18:12:49.631999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:12:49.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:49.641689 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:12:49.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:49.643068 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:12:49.644192 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:12:49.645903 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:12:49.648673 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:12:49.673011 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:12:49.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:50.239609 systemd-networkd[689]: eth0: DHCPv4 address 172.234.28.21/24, gateway 172.234.28.1 acquired from 23.213.15.243
Dec 12 18:12:50.504670 disk-uuid[768]: Warning: The kernel is still using the old partition table.
Dec 12 18:12:50.504670 disk-uuid[768]: The new table will be used at the next reboot or after you
Dec 12 18:12:50.504670 disk-uuid[768]: run partprobe(8) or kpartx(8)
Dec 12 18:12:50.504670 disk-uuid[768]: The operation has completed successfully.
Dec 12 18:12:50.515732 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:12:50.533390 kernel: kauditd_printk_skb: 16 callbacks suppressed
Dec 12 18:12:50.533414 kernel: audit: type=1130 audit(1765563170.516:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:50.533436 kernel: audit: type=1131 audit(1765563170.516:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:50.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:50.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:50.515873 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:12:50.517832 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:12:50.568573 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (856)
Dec 12 18:12:50.568614 kernel: BTRFS info (device sda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 12 18:12:50.572691 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:12:50.584414 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:12:50.584444 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:12:50.584459 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:12:50.594471 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:12:50.603074 kernel: BTRFS info (device sda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 12 18:12:50.603095 kernel: audit: type=1130 audit(1765563170.594:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:50.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:50.597645 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:12:50.736097 ignition[875]: Ignition 2.22.0
Dec 12 18:12:50.736110 ignition[875]: Stage: fetch-offline
Dec 12 18:12:50.736152 ignition[875]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:12:50.736164 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:12:50.736261 ignition[875]: parsed url from cmdline: ""
Dec 12 18:12:50.736266 ignition[875]: no config URL provided
Dec 12 18:12:50.736272 ignition[875]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:12:50.749938 kernel: audit: type=1130 audit(1765563170.741:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:50.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:50.740318 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:12:50.736283 ignition[875]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:12:50.742929 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 18:12:50.736289 ignition[875]: failed to fetch config: resource requires networking
Dec 12 18:12:50.748708 systemd-networkd[689]: eth0: Gained IPv6LL
Dec 12 18:12:50.737547 ignition[875]: Ignition finished successfully
Dec 12 18:12:50.783154 ignition[881]: Ignition 2.22.0
Dec 12 18:12:50.783718 ignition[881]: Stage: fetch
Dec 12 18:12:50.783879 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:12:50.783889 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:12:50.783974 ignition[881]: parsed url from cmdline: ""
Dec 12 18:12:50.783978 ignition[881]: no config URL provided
Dec 12 18:12:50.783984 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:12:50.783993 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:12:50.784023 ignition[881]: PUT http://169.254.169.254/v1/token: attempt #1
Dec 12 18:12:50.879590 ignition[881]: PUT result: OK
Dec 12 18:12:50.879651 ignition[881]: GET http://169.254.169.254/v1/user-data: attempt #1
Dec 12 18:12:50.994041 ignition[881]: GET result: OK
Dec 12 18:12:50.994223 ignition[881]: parsing config with SHA512: 6295ce76c31c3bc216207041b964d323278fbc21dfc1ca7264fa646b414c49613ae60f734c6962f5b1ef36ce3fba047237df82a022c0aad866c31b24deeccb55
Dec 12 18:12:51.000872 unknown[881]: fetched base config from "system"
Dec 12 18:12:51.000883 unknown[881]: fetched base config from "system"
Dec 12 18:12:51.001167 ignition[881]: fetch: fetch complete
Dec 12 18:12:51.000890 unknown[881]: fetched user config from "akamai"
Dec 12 18:12:51.001172 ignition[881]: fetch: fetch passed
Dec 12 18:12:51.013854 kernel: audit: type=1130 audit(1765563171.005:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.005118 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 18:12:51.001214 ignition[881]: Ignition finished successfully
Dec 12 18:12:51.006974 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:12:51.040235 ignition[888]: Ignition 2.22.0
Dec 12 18:12:51.040252 ignition[888]: Stage: kargs
Dec 12 18:12:51.040378 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:12:51.051918 kernel: audit: type=1130 audit(1765563171.043:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.042993 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:12:51.040388 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:12:51.046670 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:12:51.041058 ignition[888]: kargs: kargs passed
Dec 12 18:12:51.041104 ignition[888]: Ignition finished successfully
Dec 12 18:12:51.083865 ignition[895]: Ignition 2.22.0
Dec 12 18:12:51.083881 ignition[895]: Stage: disks
Dec 12 18:12:51.084034 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:12:51.084045 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:12:51.084955 ignition[895]: disks: disks passed
Dec 12 18:12:51.084999 ignition[895]: Ignition finished successfully
Dec 12 18:12:51.088147 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:12:51.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.090658 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:12:51.100034 kernel: audit: type=1130 audit(1765563171.089:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.099079 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:12:51.100804 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:12:51.102697 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:12:51.104426 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:12:51.107060 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:12:51.158489 systemd-fsck[905]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Dec 12 18:12:51.162096 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:12:51.177644 kernel: audit: type=1130 audit(1765563171.162:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.171656 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:12:51.299554 kernel: EXT4-fs (sda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none.
Dec 12 18:12:51.300700 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:12:51.302021 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:12:51.304791 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:12:51.308588 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:12:51.311031 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:12:51.311842 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:12:51.311871 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:12:51.318464 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:12:51.321668 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:12:51.330527 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (913)
Dec 12 18:12:51.330561 kernel: BTRFS info (device sda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 12 18:12:51.336934 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:12:51.347322 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:12:51.347352 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:12:51.347367 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:12:51.349281 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:12:51.396366 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:12:51.401453 initrd-setup-root[944]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:12:51.406975 initrd-setup-root[951]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:12:51.411423 initrd-setup-root[958]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:12:51.528146 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:12:51.537082 kernel: audit: type=1130 audit(1765563171.528:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.531596 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:12:51.539645 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:12:51.554247 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:12:51.559338 kernel: BTRFS info (device sda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 12 18:12:51.577526 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:12:51.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.587560 kernel: audit: type=1130 audit(1765563171.578:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.593976 ignition[1026]: INFO : Ignition 2.22.0
Dec 12 18:12:51.593976 ignition[1026]: INFO : Stage: mount
Dec 12 18:12:51.595654 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:12:51.595654 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:12:51.595654 ignition[1026]: INFO : mount: mount passed
Dec 12 18:12:51.595654 ignition[1026]: INFO : Ignition finished successfully
Dec 12 18:12:51.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:51.597547 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:12:51.601604 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:12:51.639715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:12:51.661542 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1037)
Dec 12 18:12:51.665845 kernel: BTRFS info (device sda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 12 18:12:51.665875 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:12:51.673767 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:12:51.673837 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:12:51.677776 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:12:51.679988 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:12:51.713609 ignition[1053]: INFO : Ignition 2.22.0
Dec 12 18:12:51.713609 ignition[1053]: INFO : Stage: files
Dec 12 18:12:51.715499 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:12:51.715499 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:12:51.715499 ignition[1053]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 18:12:51.715499 ignition[1053]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 18:12:51.715499 ignition[1053]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 18:12:51.743262 ignition[1053]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 18:12:51.743262 ignition[1053]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 18:12:51.743262 ignition[1053]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 18:12:51.743262 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 12 18:12:51.743262 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 12 18:12:51.722038 unknown[1053]: wrote ssh authorized keys file for user: core
Dec 12 18:12:51.826188 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 18:12:51.902633 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 12 18:12:51.904468 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:12:51.904468 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:12:51.904468 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:12:51.904468 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:12:51.904468 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:12:51.904468 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:12:51.904468 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:12:51.904468 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:12:51.913382 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:12:51.913382 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:12:51.913382 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:12:51.913382 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:12:51.913382 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:12:51.913382 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 12 18:12:52.314758 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 18:12:52.771839 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:12:52.771839 ignition[1053]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:12:52.777706 ignition[1053]: INFO : files: files passed
Dec 12 18:12:52.777706 ignition[1053]: INFO : Ignition finished successfully
Dec 12 18:12:52.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.778311 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:12:52.783687 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:12:52.793042 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:12:52.799624 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:12:52.802722 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:12:52.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.817296 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:12:52.817296 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:12:52.820433 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:12:52.822308 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:12:52.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.823867 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:12:52.826079 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:12:52.882316 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:12:52.882451 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:12:52.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.884713 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:12:52.886173 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:12:52.888202 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:12:52.889112 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:12:52.934168 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:12:52.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.937954 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:12:52.960551 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:12:52.960794 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:12:52.961883 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:12:52.963653 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:12:52.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.965415 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:12:52.965604 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:12:52.967882 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:12:52.969252 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:12:52.970759 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:12:52.972337 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:12:52.973913 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:12:52.975573 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:12:52.977234 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:12:52.979044 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:12:52.981111 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:12:52.982899 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:12:52.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.984712 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:12:52.986484 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:12:52.986659 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:12:52.988753 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:12:53.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.990071 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:12:53.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.991599 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:12:53.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:52.991718 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:12:53.015229 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:12:53.015562 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:12:53.017758 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:12:53.017951 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:12:53.019049 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:12:53.019219 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:12:53.022597 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:12:53.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.025669 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:12:53.029979 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:12:53.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.030181 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:12:53.032767 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:12:53.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.032920 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:12:53.035942 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:12:53.036081 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:12:53.050834 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:12:53.050963 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:12:53.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.068227 ignition[1109]: INFO : Ignition 2.22.0
Dec 12 18:12:53.070615 ignition[1109]: INFO : Stage: umount
Dec 12 18:12:53.070615 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:12:53.070615 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:12:53.070615 ignition[1109]: INFO : umount: umount passed
Dec 12 18:12:53.070151 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:12:53.078003 ignition[1109]: INFO : Ignition finished successfully
Dec 12 18:12:53.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.077632 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:12:53.077802 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:12:53.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.079380 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:12:53.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.079481 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:12:53.083082 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:12:53.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.083136 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:12:53.085312 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 18:12:53.085369 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 18:12:53.086636 systemd[1]: Stopped target network.target - Network.
Dec 12 18:12:53.087986 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:12:53.088047 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:12:53.090871 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:12:53.091781 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:12:53.097725 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:12:53.098858 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:12:53.100227 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:12:53.101846 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:12:53.101898 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:12:53.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.103462 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:12:53.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.103526 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:12:53.104868 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 12 18:12:53.104907 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 12 18:12:53.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.106280 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:12:53.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.106337 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:12:53.107671 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:12:53.107723 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:12:53.109193 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:12:53.110669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:12:53.113841 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:12:53.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.123000 audit: BPF prog-id=9 op=UNLOAD
Dec 12 18:12:53.113960 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:12:53.115237 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:12:53.115329 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:12:53.120762 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:12:53.120896 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:12:53.124349 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:12:53.126348 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:12:53.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.126390 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:12:53.128732 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:12:53.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.130283 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:12:53.142000 audit: BPF prog-id=6 op=UNLOAD
Dec 12 18:12:53.130346 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:12:53.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.133230 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:12:53.137560 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:12:53.137676 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:12:53.142205 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:12:53.142293 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:12:53.145239 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:12:53.145295 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:12:53.158129 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:12:53.158337 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:12:53.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.161449 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:12:53.161497 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:12:53.162842 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:12:53.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.162879 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:12:53.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.164409 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:12:53.164462 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:12:53.166051 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:12:53.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.166100 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:12:53.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.167368 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:12:53.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.167423 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:12:53.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:53.169893 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:12:53.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:53.171990 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:12:53.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:53.172047 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:12:53.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:53.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:53.174642 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:12:53.174695 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:12:53.176621 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 18:12:53.176674 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:12:53.178008 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:12:53.178059 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:12:53.179481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:12:53.179561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 12 18:12:53.182043 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:12:53.182150 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:12:53.204678 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:12:53.204778 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:12:53.206402 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:12:53.209562 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:12:53.223924 systemd[1]: Switching root. Dec 12 18:12:53.257480 systemd-journald[304]: Journal stopped Dec 12 18:12:54.488479 systemd-journald[304]: Received SIGTERM from PID 1 (systemd). Dec 12 18:12:54.488652 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:12:54.488672 kernel: SELinux: policy capability open_perms=1 Dec 12 18:12:54.488684 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:12:54.488694 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:12:54.488708 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:12:54.488719 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:12:54.488729 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:12:54.488739 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:12:54.488750 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:12:54.488768 systemd[1]: Successfully loaded SELinux policy in 74.172ms. Dec 12 18:12:54.488789 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.562ms. 
Dec 12 18:12:54.488803 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:12:54.488865 systemd[1]: Detected virtualization kvm. Dec 12 18:12:54.488885 systemd[1]: Detected architecture x86-64. Dec 12 18:12:54.488896 systemd[1]: Detected first boot. Dec 12 18:12:54.488908 systemd[1]: Initializing machine ID from random generator. Dec 12 18:12:54.488919 kernel: Guest personality initialized and is inactive Dec 12 18:12:54.488929 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:12:54.488965 kernel: Initialized host personality Dec 12 18:12:54.488980 zram_generator::config[1158]: No configuration found. Dec 12 18:12:54.488992 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:12:54.489003 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:12:54.489014 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:12:54.489027 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:12:54.489061 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:12:54.489078 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:12:54.489092 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:12:54.489103 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:12:54.489114 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:12:54.489126 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Dec 12 18:12:54.489137 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:12:54.489180 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:12:54.489192 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:12:54.489204 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:12:54.489215 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:12:54.489227 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:12:54.489238 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:12:54.489249 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:12:54.489263 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:12:54.489277 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:12:54.489289 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:12:54.489300 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:12:54.489312 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:12:54.489323 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:12:54.489337 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:12:54.489348 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:12:54.489360 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:12:54.489371 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Dec 12 18:12:54.489382 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 18:12:54.489393 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:12:54.489404 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:12:54.489436 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:12:54.489463 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:12:54.489474 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:12:54.489486 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 18:12:54.489500 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 18:12:54.489524 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:12:54.489536 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 18:12:54.489547 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 18:12:54.489559 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:12:54.489570 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:12:54.489584 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:12:54.489595 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:12:54.489607 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:12:54.489619 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:12:54.489630 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:12:54.489642 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Dec 12 18:12:54.489653 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:12:54.489667 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:12:54.489679 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:12:54.489691 systemd[1]: Reached target machines.target - Containers. Dec 12 18:12:54.489710 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 18:12:54.489724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:12:54.489736 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:12:54.489747 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:12:54.489761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:12:54.489773 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:12:54.489784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:12:54.489795 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:12:54.489807 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:12:54.489819 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:12:54.489833 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:12:54.489844 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:12:54.489855 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:12:54.489866 systemd[1]: Stopped systemd-fsck-usr.service. 
Dec 12 18:12:54.489878 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:12:54.489890 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:12:54.489901 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:12:54.489915 kernel: fuse: init (API version 7.41) Dec 12 18:12:54.489926 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:12:54.489938 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:12:54.489949 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:12:54.489961 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:12:54.489973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:12:54.489986 kernel: ACPI: bus type drm_connector registered Dec 12 18:12:54.489997 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:12:54.490008 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:12:54.490019 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:12:54.490031 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:12:54.490042 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:12:54.490053 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:12:54.490066 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:12:54.490078 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Dec 12 18:12:54.490089 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:12:54.490100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:12:54.490111 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:12:54.490145 systemd-journald[1235]: Collecting audit messages is enabled. Dec 12 18:12:54.490169 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:12:54.490181 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:12:54.490193 systemd-journald[1235]: Journal started Dec 12 18:12:54.490216 systemd-journald[1235]: Runtime Journal (/run/log/journal/55d0d97b32d849c2bfbd49ef208a495d) is 8M, max 78.1M, 70.1M free. Dec 12 18:12:54.163000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 18:12:54.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.361000 audit: BPF prog-id=14 op=UNLOAD Dec 12 18:12:54.361000 audit: BPF prog-id=13 op=UNLOAD Dec 12 18:12:54.362000 audit: BPF prog-id=15 op=LOAD Dec 12 18:12:54.362000 audit: BPF prog-id=16 op=LOAD Dec 12 18:12:54.362000 audit: BPF prog-id=17 op=LOAD Dec 12 18:12:54.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:12:54.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.481000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 18:12:54.481000 audit[1235]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc0e5d1ae0 a2=4000 a3=0 items=0 ppid=1 pid=1235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:12:54.481000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 18:12:54.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.029693 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:12:54.050196 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 12 18:12:54.050972 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 12 18:12:54.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.496530 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:12:54.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.502673 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:12:54.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.504208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:12:54.504744 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:12:54.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.506001 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:12:54.506313 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Dec 12 18:12:54.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.507985 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:12:54.508273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:12:54.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.509634 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:12:54.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.510948 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:12:54.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.513208 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Dec 12 18:12:54.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.514745 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:12:54.531176 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:12:54.532962 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 18:12:54.533856 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:12:54.533949 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:12:54.535939 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:12:54.537774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:12:54.537964 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 18:12:54.541664 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:12:54.544771 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:12:54.545699 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:12:54.547720 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Dec 12 18:12:54.549593 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:12:54.551768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:12:54.563841 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:12:54.568735 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:12:54.576984 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:12:54.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.577899 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:12:54.580100 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:12:54.589015 systemd-journald[1235]: Time spent on flushing to /var/log/journal/55d0d97b32d849c2bfbd49ef208a495d is 52.027ms for 1124 entries. Dec 12 18:12:54.589015 systemd-journald[1235]: System Journal (/var/log/journal/55d0d97b32d849c2bfbd49ef208a495d) is 8M, max 588.1M, 580.1M free. Dec 12 18:12:54.660907 systemd-journald[1235]: Received client request to flush runtime journal. Dec 12 18:12:54.660956 kernel: loop1: detected capacity change from 0 to 119256 Dec 12 18:12:54.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 18:12:54.622091 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:12:54.655686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:12:54.671900 kernel: loop2: detected capacity change from 0 to 224512 Dec 12 18:12:54.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.664584 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:12:54.669303 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:12:54.673878 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Dec 12 18:12:54.673902 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Dec 12 18:12:54.681829 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:12:54.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.687734 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:12:54.711535 kernel: loop3: detected capacity change from 0 to 111544 Dec 12 18:12:54.732840 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Dec 12 18:12:54.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.735000 audit: BPF prog-id=18 op=LOAD Dec 12 18:12:54.735000 audit: BPF prog-id=19 op=LOAD Dec 12 18:12:54.735000 audit: BPF prog-id=20 op=LOAD Dec 12 18:12:54.737196 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 18:12:54.739000 audit: BPF prog-id=21 op=LOAD Dec 12 18:12:54.740804 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:12:54.750566 kernel: loop4: detected capacity change from 0 to 8 Dec 12 18:12:54.744816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:12:54.760000 audit: BPF prog-id=22 op=LOAD Dec 12 18:12:54.761000 audit: BPF prog-id=23 op=LOAD Dec 12 18:12:54.761000 audit: BPF prog-id=24 op=LOAD Dec 12 18:12:54.763055 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 18:12:54.764000 audit: BPF prog-id=25 op=LOAD Dec 12 18:12:54.765000 audit: BPF prog-id=26 op=LOAD Dec 12 18:12:54.767000 audit: BPF prog-id=27 op=LOAD Dec 12 18:12:54.768757 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:12:54.780525 kernel: loop5: detected capacity change from 0 to 119256 Dec 12 18:12:54.811541 kernel: loop6: detected capacity change from 0 to 224512 Dec 12 18:12:54.811760 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Dec 12 18:12:54.812333 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Dec 12 18:12:54.821910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 12 18:12:54.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.831597 kernel: loop7: detected capacity change from 0 to 111544 Dec 12 18:12:54.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.840664 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:12:54.844361 systemd-nsresourced[1301]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 18:12:54.853157 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 18:12:54.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:54.860579 kernel: loop1: detected capacity change from 0 to 8 Dec 12 18:12:54.869799 (sd-merge)[1304]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'. Dec 12 18:12:54.881476 (sd-merge)[1304]: Merged extensions into '/usr'. Dec 12 18:12:54.892650 systemd[1]: Reload requested from client PID 1277 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:12:54.892671 systemd[1]: Reloading... Dec 12 18:12:55.043790 zram_generator::config[1349]: No configuration found. Dec 12 18:12:55.045922 systemd-oomd[1297]: No swap; memory pressure usage will be degraded Dec 12 18:12:55.083215 systemd-resolved[1298]: Positive Trust Anchors: Dec 12 18:12:55.084760 systemd-resolved[1298]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:12:55.084827 systemd-resolved[1298]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 18:12:55.084897 systemd-resolved[1298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:12:55.090462 systemd-resolved[1298]: Defaulting to hostname 'linux'. Dec 12 18:12:55.266622 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:12:55.266947 systemd[1]: Reloading finished in 373 ms. Dec 12 18:12:55.300910 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 18:12:55.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:55.302127 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:12:55.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:12:55.303408 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Dec 12 18:12:55.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.304803 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:12:55.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.310085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:12:55.312570 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:12:55.314532 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:12:55.322437 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:12:55.327644 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:12:55.328000 audit: BPF prog-id=8 op=UNLOAD
Dec 12 18:12:55.328000 audit: BPF prog-id=7 op=UNLOAD
Dec 12 18:12:55.328000 audit: BPF prog-id=28 op=LOAD
Dec 12 18:12:55.328000 audit: BPF prog-id=29 op=LOAD
Dec 12 18:12:55.331605 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:12:55.335000 audit: BPF prog-id=30 op=LOAD
Dec 12 18:12:55.335000 audit: BPF prog-id=15 op=UNLOAD
Dec 12 18:12:55.335000 audit: BPF prog-id=31 op=LOAD
Dec 12 18:12:55.335000 audit: BPF prog-id=32 op=LOAD
Dec 12 18:12:55.335000 audit: BPF prog-id=16 op=UNLOAD
Dec 12 18:12:55.335000 audit: BPF prog-id=17 op=UNLOAD
Dec 12 18:12:55.341000 audit: BPF prog-id=33 op=LOAD
Dec 12 18:12:55.341000 audit: BPF prog-id=18 op=UNLOAD
Dec 12 18:12:55.341000 audit: BPF prog-id=34 op=LOAD
Dec 12 18:12:55.341000 audit: BPF prog-id=35 op=LOAD
Dec 12 18:12:55.341000 audit: BPF prog-id=19 op=UNLOAD
Dec 12 18:12:55.341000 audit: BPF prog-id=20 op=UNLOAD
Dec 12 18:12:55.342000 audit: BPF prog-id=36 op=LOAD
Dec 12 18:12:55.342000 audit: BPF prog-id=21 op=UNLOAD
Dec 12 18:12:55.343000 audit: BPF prog-id=37 op=LOAD
Dec 12 18:12:55.343000 audit: BPF prog-id=25 op=UNLOAD
Dec 12 18:12:55.345000 audit: BPF prog-id=38 op=LOAD
Dec 12 18:12:55.345000 audit: BPF prog-id=39 op=LOAD
Dec 12 18:12:55.345000 audit: BPF prog-id=26 op=UNLOAD
Dec 12 18:12:55.345000 audit: BPF prog-id=27 op=UNLOAD
Dec 12 18:12:55.346000 audit: BPF prog-id=40 op=LOAD
Dec 12 18:12:55.346000 audit: BPF prog-id=22 op=UNLOAD
Dec 12 18:12:55.346000 audit: BPF prog-id=41 op=LOAD
Dec 12 18:12:55.346000 audit: BPF prog-id=42 op=LOAD
Dec 12 18:12:55.346000 audit: BPF prog-id=23 op=UNLOAD
Dec 12 18:12:55.346000 audit: BPF prog-id=24 op=UNLOAD
Dec 12 18:12:55.350459 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:12:55.351914 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:12:55.358745 systemd[1]: Reload requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:12:55.358770 systemd[1]: Reloading...
Dec 12 18:12:55.366426 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:12:55.366470 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:12:55.366813 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:12:55.368151 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Dec 12 18:12:55.368223 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Dec 12 18:12:55.376028 systemd-udevd[1396]: Using default interface naming scheme 'v257'.
Dec 12 18:12:55.380230 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:12:55.380248 systemd-tmpfiles[1395]: Skipping /boot
Dec 12 18:12:55.403197 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:12:55.403263 systemd-tmpfiles[1395]: Skipping /boot
Dec 12 18:12:55.475563 zram_generator::config[1438]: No configuration found.
Dec 12 18:12:55.645540 kernel: mousedev: PS/2 mouse device common for all mice
Dec 12 18:12:55.658551 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 12 18:12:55.693943 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 12 18:12:55.694290 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 12 18:12:55.711549 kernel: ACPI: button: Power Button [PWRF]
Dec 12 18:12:55.754358 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 12 18:12:55.754491 systemd[1]: Reloading finished in 395 ms.
Dec 12 18:12:55.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.764975 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:12:55.767662 kernel: kauditd_printk_skb: 146 callbacks suppressed
Dec 12 18:12:55.767693 kernel: audit: type=1130 audit(1765563175.765:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.773636 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:12:55.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.788664 kernel: audit: type=1130 audit(1765563175.775:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.788716 kernel: audit: type=1334 audit(1765563175.779:183): prog-id=43 op=LOAD
Dec 12 18:12:55.779000 audit: BPF prog-id=43 op=LOAD
Dec 12 18:12:55.779000 audit: BPF prog-id=33 op=UNLOAD
Dec 12 18:12:55.793632 kernel: audit: type=1334 audit(1765563175.779:184): prog-id=33 op=UNLOAD
Dec 12 18:12:55.779000 audit: BPF prog-id=44 op=LOAD
Dec 12 18:12:55.796571 kernel: audit: type=1334 audit(1765563175.779:185): prog-id=44 op=LOAD
Dec 12 18:12:55.796600 kernel: audit: type=1334 audit(1765563175.779:186): prog-id=45 op=LOAD
Dec 12 18:12:55.779000 audit: BPF prog-id=45 op=LOAD
Dec 12 18:12:55.779000 audit: BPF prog-id=34 op=UNLOAD
Dec 12 18:12:55.802529 kernel: audit: type=1334 audit(1765563175.779:187): prog-id=34 op=UNLOAD
Dec 12 18:12:55.802558 kernel: audit: type=1334 audit(1765563175.779:188): prog-id=35 op=UNLOAD
Dec 12 18:12:55.779000 audit: BPF prog-id=35 op=UNLOAD
Dec 12 18:12:55.805543 kernel: audit: type=1334 audit(1765563175.780:189): prog-id=46 op=LOAD
Dec 12 18:12:55.780000 audit: BPF prog-id=46 op=LOAD
Dec 12 18:12:55.809555 kernel: audit: type=1334 audit(1765563175.780:190): prog-id=40 op=UNLOAD
Dec 12 18:12:55.780000 audit: BPF prog-id=40 op=UNLOAD
Dec 12 18:12:55.780000 audit: BPF prog-id=47 op=LOAD
Dec 12 18:12:55.780000 audit: BPF prog-id=48 op=LOAD
Dec 12 18:12:55.780000 audit: BPF prog-id=41 op=UNLOAD
Dec 12 18:12:55.780000 audit: BPF prog-id=42 op=UNLOAD
Dec 12 18:12:55.781000 audit: BPF prog-id=49 op=LOAD
Dec 12 18:12:55.781000 audit: BPF prog-id=36 op=UNLOAD
Dec 12 18:12:55.783000 audit: BPF prog-id=50 op=LOAD
Dec 12 18:12:55.783000 audit: BPF prog-id=37 op=UNLOAD
Dec 12 18:12:55.783000 audit: BPF prog-id=51 op=LOAD
Dec 12 18:12:55.783000 audit: BPF prog-id=52 op=LOAD
Dec 12 18:12:55.784000 audit: BPF prog-id=38 op=UNLOAD
Dec 12 18:12:55.784000 audit: BPF prog-id=39 op=UNLOAD
Dec 12 18:12:55.786000 audit: BPF prog-id=53 op=LOAD
Dec 12 18:12:55.786000 audit: BPF prog-id=54 op=LOAD
Dec 12 18:12:55.786000 audit: BPF prog-id=28 op=UNLOAD
Dec 12 18:12:55.786000 audit: BPF prog-id=29 op=UNLOAD
Dec 12 18:12:55.787000 audit: BPF prog-id=55 op=LOAD
Dec 12 18:12:55.787000 audit: BPF prog-id=30 op=UNLOAD
Dec 12 18:12:55.787000 audit: BPF prog-id=56 op=LOAD
Dec 12 18:12:55.787000 audit: BPF prog-id=57 op=LOAD
Dec 12 18:12:55.787000 audit: BPF prog-id=31 op=UNLOAD
Dec 12 18:12:55.787000 audit: BPF prog-id=32 op=UNLOAD
Dec 12 18:12:55.848919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:12:55.850950 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:12:55.854303 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:12:55.856387 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 18:12:55.857304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:12:55.860775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:12:55.862714 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:12:55.871235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:12:55.872272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:12:55.872456 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 12 18:12:55.874259 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:12:55.877653 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 18:12:55.879686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:12:55.881738 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:12:55.899000 audit: BPF prog-id=58 op=LOAD
Dec 12 18:12:55.902077 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:12:55.907817 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:12:55.908960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:12:55.916104 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:12:55.916277 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:12:55.916463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:12:55.916663 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 12 18:12:55.916749 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:12:55.916829 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:12:55.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.922390 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:12:55.922720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:12:55.927348 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:12:55.929899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:12:55.937266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:12:55.939131 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:12:55.939318 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 12 18:12:55.939409 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:12:55.940139 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:12:55.940475 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:12:55.953200 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:12:55.956075 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:12:55.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.958456 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:12:55.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.960300 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:12:55.960655 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:12:55.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.962184 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:12:55.963000 audit: BPF prog-id=59 op=LOAD
Dec 12 18:12:55.966154 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 18:12:55.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:55.977274 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 18:12:55.986000 audit[1531]: SYSTEM_BOOT pid=1531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:56.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:56.001999 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:12:56.011106 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:12:56.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:56.016307 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:12:56.025752 kernel: EDAC MC: Ver: 3.0.0
Dec 12 18:12:56.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:56.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:56.021956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:12:56.059972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:12:56.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:12:56.095039 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:12:56.096218 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:12:56.113000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 12 18:12:56.113000 audit[1565]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc4936f170 a2=420 a3=0 items=0 ppid=1516 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:12:56.113000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 12 18:12:56.115287 augenrules[1565]: No rules
Dec 12 18:12:56.121148 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:12:56.121578 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:12:56.308213 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 18:12:56.328643 systemd-networkd[1530]: lo: Link UP
Dec 12 18:12:56.328654 systemd-networkd[1530]: lo: Gained carrier
Dec 12 18:12:56.335994 systemd-networkd[1530]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 18:12:56.336008 systemd-networkd[1530]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:12:56.344596 systemd-networkd[1530]: eth0: Link UP
Dec 12 18:12:56.346421 systemd-networkd[1530]: eth0: Gained carrier
Dec 12 18:12:56.346444 systemd-networkd[1530]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 12 18:12:56.424992 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:12:56.426306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:12:56.433101 systemd[1]: Reached target network.target - Network.
Dec 12 18:12:56.434918 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:12:56.443877 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:12:56.446278 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:12:56.471317 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:12:56.576357 ldconfig[1521]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:12:56.579880 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:12:56.582425 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:12:56.605897 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:12:56.606958 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:12:56.608015 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:12:56.608857 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 18:12:56.609807 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 12 18:12:56.610878 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 18:12:56.611906 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 18:12:56.612874 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Dec 12 18:12:56.613748 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Dec 12 18:12:56.614473 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 18:12:56.615246 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:12:56.615292 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:12:56.616328 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:12:56.618626 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 18:12:56.621769 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 18:12:56.624866 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 18:12:56.625888 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 18:12:56.626921 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 18:12:56.639673 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 18:12:56.641013 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 18:12:56.642690 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 18:12:56.644383 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:12:56.645309 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:12:56.646263 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:12:56.646299 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:12:56.647637 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 18:12:56.651656 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 12 18:12:56.666993 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 18:12:56.669395 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 18:12:56.674664 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 18:12:56.679706 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 18:12:56.680437 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:12:56.684723 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 12 18:12:56.689779 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 18:12:56.702710 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 18:12:56.708833 jq[1592]: false
Dec 12 18:12:56.714170 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 18:12:56.721713 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 18:12:56.727194 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing passwd entry cache
Dec 12 18:12:56.727202 oslogin_cache_refresh[1594]: Refreshing passwd entry cache
Dec 12 18:12:56.732042 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting users, quitting
Dec 12 18:12:56.732102 oslogin_cache_refresh[1594]: Failure getting users, quitting
Dec 12 18:12:56.732169 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:12:56.732198 oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:12:56.732289 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing group entry cache
Dec 12 18:12:56.732321 oslogin_cache_refresh[1594]: Refreshing group entry cache
Dec 12 18:12:56.732914 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting groups, quitting
Dec 12 18:12:56.732955 oslogin_cache_refresh[1594]: Failure getting groups, quitting
Dec 12 18:12:56.733004 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:12:56.733032 oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:12:56.738644 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 18:12:56.739579 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 18:12:56.740461 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 18:12:56.742739 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 18:12:56.752900 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 18:12:56.758699 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 18:12:56.761325 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 18:12:56.762646 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 18:12:56.763040 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 12 18:12:56.763325 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 12 18:12:56.766766 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 18:12:56.767030 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 18:12:56.788868 update_engine[1606]: I20251212 18:12:56.788615  1606 main.cc:92] Flatcar Update Engine starting
Dec 12 18:12:56.794828 coreos-metadata[1589]: Dec 12 18:12:56.794 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 12 18:12:56.804289 extend-filesystems[1593]: Found /dev/sda6
Dec 12 18:12:56.819018 extend-filesystems[1593]: Found /dev/sda9
Dec 12 18:12:56.819018 extend-filesystems[1593]: Checking size of /dev/sda9
Dec 12 18:12:56.816954 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 18:12:56.817292 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 18:12:56.828935 jq[1610]: true
Dec 12 18:12:56.842858 extend-filesystems[1593]: Resized partition /dev/sda9
Dec 12 18:12:56.852561 tar[1617]: linux-amd64/LICENSE
Dec 12 18:12:56.852561 tar[1617]: linux-amd64/helm
Dec 12 18:12:56.870766 extend-filesystems[1645]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 18:12:56.888783 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks
Dec 12 18:12:56.888835 jq[1641]: true
Dec 12 18:12:56.902314 dbus-daemon[1590]: [system] SELinux support is enabled
Dec 12 18:12:56.902795 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 18:12:56.908619 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 18:12:56.908647 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 18:12:56.910181 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 18:12:56.910202 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 18:12:56.939527 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:12:56.945920 update_engine[1606]: I20251212 18:12:56.945739  1606 update_check_scheduler.cc:74] Next update check in 3m35s
Dec 12 18:12:56.954765 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 18:12:57.024741 bash[1663]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:12:57.031471 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 18:12:57.037596 systemd[1]: Starting sshkeys.service...
Dec 12 18:12:57.043945 systemd-logind[1603]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 12 18:12:57.043979 systemd-logind[1603]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 12 18:12:57.049212 systemd-logind[1603]: New seat seat0.
Dec 12 18:12:57.055392 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 18:12:57.126644 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 12 18:12:57.133499 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 12 18:12:57.137575 systemd-networkd[1530]: eth0: DHCPv4 address 172.234.28.21/24, gateway 172.234.28.1 acquired from 23.213.15.243
Dec 12 18:12:57.138277 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection.
Dec 12 18:12:57.138388 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection.
Dec 12 18:12:57.161172 dbus-daemon[1590]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1530 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 12 18:12:57.170774 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 12 18:12:57.278447 containerd[1629]: time="2025-12-12T18:12:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 18:12:57.283722 kernel: EXT4-fs (sda9): resized filesystem to 19377147
Dec 12 18:12:57.303874 containerd[1629]: time="2025-12-12T18:12:57.303822460Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Dec 12 18:12:57.308529 extend-filesystems[1645]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 12 18:12:57.308529 extend-filesystems[1645]: old_desc_blocks = 1, new_desc_blocks = 10
Dec 12 18:12:57.308529 extend-filesystems[1645]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long.
Dec 12 18:12:57.317672 extend-filesystems[1593]: Resized filesystem in /dev/sda9
Dec 12 18:12:57.309128 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 18:12:57.311194 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 18:12:57.348006 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 12 18:12:57.350021 dbus-daemon[1590]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 18:12:57.351179 coreos-metadata[1673]: Dec 12 18:12:57.349 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 12 18:12:57.353867 containerd[1629]: time="2025-12-12T18:12:57.352938740Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.47µs" Dec 12 18:12:57.353867 containerd[1629]: time="2025-12-12T18:12:57.353110920Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:12:57.353867 containerd[1629]: time="2025-12-12T18:12:57.353230250Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:12:57.353867 containerd[1629]: time="2025-12-12T18:12:57.353253580Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:12:57.354032 dbus-daemon[1590]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1674 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 18:12:57.354577 containerd[1629]: time="2025-12-12T18:12:57.354106370Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:12:57.354577 containerd[1629]: time="2025-12-12T18:12:57.354125980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:12:57.354577 containerd[1629]: time="2025-12-12T18:12:57.354265440Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:12:57.354577 containerd[1629]: time="2025-12-12T18:12:57.354285440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 Dec 12 18:12:57.356285 containerd[1629]: time="2025-12-12T18:12:57.355031580Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:12:57.356285 containerd[1629]: time="2025-12-12T18:12:57.355563960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:12:57.356285 containerd[1629]: time="2025-12-12T18:12:57.355584640Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:12:57.356285 containerd[1629]: time="2025-12-12T18:12:57.355602350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 18:12:57.356285 containerd[1629]: time="2025-12-12T18:12:57.355855040Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 18:12:57.356285 containerd[1629]: time="2025-12-12T18:12:57.355899760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:12:57.356285 containerd[1629]: time="2025-12-12T18:12:57.356022570Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:12:57.357227 containerd[1629]: time="2025-12-12T18:12:57.357195250Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:12:57.357263 containerd[1629]: time="2025-12-12T18:12:57.357239520Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: 
skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:12:57.357263 containerd[1629]: time="2025-12-12T18:12:57.357251020Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:12:57.359043 containerd[1629]: time="2025-12-12T18:12:57.357478040Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:12:57.366147 containerd[1629]: time="2025-12-12T18:12:57.361907830Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:12:57.366147 containerd[1629]: time="2025-12-12T18:12:57.362547790Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:12:57.363290 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369605800Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369656190Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369783490Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369797740Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369808770Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369818730Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service 
type=io.containerd.service.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369828540Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369837120Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369847220Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369857770Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369866960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369876420Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369884160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:12:57.370531 containerd[1629]: time="2025-12-12T18:12:57.369907230Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370025880Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370043420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370055530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content 
type=io.containerd.grpc.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370065020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370074500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370085000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370102040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370110970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370129420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370147970Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370158770Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370184340Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370221380Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:12:57.370802 containerd[1629]: time="2025-12-12T18:12:57.370231800Z" level=info msg="Start snapshots syncer" Dec 12 18:12:57.371050 containerd[1629]: 
time="2025-12-12T18:12:57.370823800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:12:57.371206 containerd[1629]: time="2025-12-12T18:12:57.371161500Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 
18:12:57.371311 containerd[1629]: time="2025-12-12T18:12:57.371228710Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:12:57.371748 containerd[1629]: time="2025-12-12T18:12:57.371717920Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:12:57.371939 containerd[1629]: time="2025-12-12T18:12:57.371908200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:12:57.371966 containerd[1629]: time="2025-12-12T18:12:57.371942540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:12:57.371966 containerd[1629]: time="2025-12-12T18:12:57.371954240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:12:57.371966 containerd[1629]: time="2025-12-12T18:12:57.371963230Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:12:57.372019 containerd[1629]: time="2025-12-12T18:12:57.371974930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:12:57.372019 containerd[1629]: time="2025-12-12T18:12:57.371984380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:12:57.372019 containerd[1629]: time="2025-12-12T18:12:57.371993530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:12:57.372019 containerd[1629]: time="2025-12-12T18:12:57.372009950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:12:57.372094 containerd[1629]: time="2025-12-12T18:12:57.372023250Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 
18:12:57.372453 containerd[1629]: time="2025-12-12T18:12:57.372425830Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:12:57.372453 containerd[1629]: time="2025-12-12T18:12:57.372449090Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:12:57.372553 containerd[1629]: time="2025-12-12T18:12:57.372457890Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:12:57.372578 containerd[1629]: time="2025-12-12T18:12:57.372555120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:12:57.372578 containerd[1629]: time="2025-12-12T18:12:57.372565660Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:12:57.372578 containerd[1629]: time="2025-12-12T18:12:57.372575330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:12:57.372645 containerd[1629]: time="2025-12-12T18:12:57.372584630Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:12:57.372645 containerd[1629]: time="2025-12-12T18:12:57.372596540Z" level=info msg="runtime interface created" Dec 12 18:12:57.372645 containerd[1629]: time="2025-12-12T18:12:57.372602250Z" level=info msg="created NRI interface" Dec 12 18:12:57.372645 containerd[1629]: time="2025-12-12T18:12:57.372610000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:12:57.372645 containerd[1629]: time="2025-12-12T18:12:57.372620000Z" level=info msg="Connect containerd service" Dec 12 18:12:57.372645 containerd[1629]: time="2025-12-12T18:12:57.372636600Z" level=info 
msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:12:57.375391 containerd[1629]: time="2025-12-12T18:12:57.374203840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:12:57.429986 sshd_keygen[1631]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:12:57.439640 locksmithd[1664]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:12:57.467594 coreos-metadata[1673]: Dec 12 18:12:57.466 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Dec 12 18:12:57.494046 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:12:57.498196 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:12:57.518049 polkitd[1685]: Started polkitd version 126 Dec 12 18:12:57.524825 polkitd[1685]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 18:12:57.526388 polkitd[1685]: Loading rules from directory /run/polkit-1/rules.d Dec 12 18:12:57.526455 polkitd[1685]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:12:57.526754 polkitd[1685]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 18:12:57.526801 polkitd[1685]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:12:57.526868 polkitd[1685]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 18:12:57.527617 polkitd[1685]: Finished loading, compiling and executing 2 rules Dec 12 18:12:57.531204 dbus-daemon[1590]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 18:12:57.532016 systemd[1]: Started polkit.service - Authorization Manager. 
Dec 12 18:12:57.533214 polkitd[1685]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 18:12:57.534550 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:12:57.534883 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:12:57.539955 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:12:57.565272 systemd-hostnamed[1674]: Hostname set to <172-234-28-21> (transient) Dec 12 18:12:57.566573 systemd-resolved[1298]: System hostname changed to '172-234-28-21'. Dec 12 18:12:57.570156 containerd[1629]: time="2025-12-12T18:12:57.570129260Z" level=info msg="Start subscribing containerd event" Dec 12 18:12:57.570725 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:12:57.570933 containerd[1629]: time="2025-12-12T18:12:57.570866200Z" level=info msg="Start recovering state" Dec 12 18:12:57.571795 containerd[1629]: time="2025-12-12T18:12:57.571213600Z" level=info msg="Start event monitor" Dec 12 18:12:57.571795 containerd[1629]: time="2025-12-12T18:12:57.571232490Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:12:57.571795 containerd[1629]: time="2025-12-12T18:12:57.571241070Z" level=info msg="Start streaming server" Dec 12 18:12:57.571795 containerd[1629]: time="2025-12-12T18:12:57.571249740Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:12:57.571795 containerd[1629]: time="2025-12-12T18:12:57.571256960Z" level=info msg="runtime interface starting up..." Dec 12 18:12:57.571795 containerd[1629]: time="2025-12-12T18:12:57.571262870Z" level=info msg="starting plugins..." Dec 12 18:12:57.571795 containerd[1629]: time="2025-12-12T18:12:57.571277090Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:12:57.572682 containerd[1629]: time="2025-12-12T18:12:57.572659770Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 12 18:12:57.573886 containerd[1629]: time="2025-12-12T18:12:57.573103570Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:12:57.575956 containerd[1629]: time="2025-12-12T18:12:57.575936740Z" level=info msg="containerd successfully booted in 0.299190s" Dec 12 18:12:57.577818 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:12:57.582760 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:12:57.584756 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:12:57.587068 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:12:57.605849 coreos-metadata[1673]: Dec 12 18:12:57.605 INFO Fetch successful Dec 12 18:12:57.626526 update-ssh-keys[1726]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:12:57.628330 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:12:57.633257 systemd[1]: Finished sshkeys.service. Dec 12 18:12:57.647805 tar[1617]: linux-amd64/README.md Dec 12 18:12:57.665373 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:12:57.805852 coreos-metadata[1589]: Dec 12 18:12:57.805 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Dec 12 18:12:57.897269 coreos-metadata[1589]: Dec 12 18:12:57.897 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Dec 12 18:12:58.128870 coreos-metadata[1589]: Dec 12 18:12:58.128 INFO Fetch successful Dec 12 18:12:58.128870 coreos-metadata[1589]: Dec 12 18:12:58.128 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Dec 12 18:12:58.170764 systemd-networkd[1530]: eth0: Gained IPv6LL Dec 12 18:12:58.171418 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection. Dec 12 18:12:58.173870 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Dec 12 18:12:58.175177 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:12:58.177816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:12:58.180710 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:12:58.214660 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:12:58.391213 coreos-metadata[1589]: Dec 12 18:12:58.391 INFO Fetch successful Dec 12 18:12:58.524189 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:12:58.525842 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:12:59.165493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:12:59.166838 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:12:59.168944 systemd[1]: Startup finished in 2.996s (kernel) + 5.591s (initrd) + 5.863s (userspace) = 14.451s. Dec 12 18:12:59.169956 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:12:59.423153 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection. Dec 12 18:12:59.728673 kubelet[1770]: E1212 18:12:59.728464 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:12:59.733755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:12:59.733991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:12:59.735753 systemd[1]: kubelet.service: Consumed 947ms CPU time, 265M memory peak. 
Dec 12 18:13:01.122994 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:13:01.124984 systemd[1]: Started sshd@0-172.234.28.21:22-139.178.89.65:56636.service - OpenSSH per-connection server daemon (139.178.89.65:56636). Dec 12 18:13:01.244551 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection. Dec 12 18:13:01.453251 sshd[1782]: Accepted publickey for core from 139.178.89.65 port 56636 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:13:01.455655 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:13:01.462626 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:13:01.464084 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:13:01.470489 systemd-logind[1603]: New session 1 of user core. Dec 12 18:13:01.488379 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:13:01.491590 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:13:01.506190 (systemd)[1787]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:13:01.509038 systemd-logind[1603]: New session c1 of user core. Dec 12 18:13:01.646446 systemd[1787]: Queued start job for default target default.target. Dec 12 18:13:01.654961 systemd[1787]: Created slice app.slice - User Application Slice. Dec 12 18:13:01.654989 systemd[1787]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 18:13:01.655004 systemd[1787]: Reached target paths.target - Paths. Dec 12 18:13:01.655055 systemd[1787]: Reached target timers.target - Timers. Dec 12 18:13:01.656626 systemd[1787]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:13:01.659751 systemd[1787]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... 
Dec 12 18:13:01.668996 systemd[1787]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 18:13:01.670171 systemd[1787]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:13:01.670306 systemd[1787]: Reached target sockets.target - Sockets. Dec 12 18:13:01.670457 systemd[1787]: Reached target basic.target - Basic System. Dec 12 18:13:01.670714 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:13:01.671743 systemd[1787]: Reached target default.target - Main User Target. Dec 12 18:13:01.671787 systemd[1787]: Startup finished in 156ms. Dec 12 18:13:01.674657 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:13:01.842204 systemd[1]: Started sshd@1-172.234.28.21:22-139.178.89.65:56646.service - OpenSSH per-connection server daemon (139.178.89.65:56646). Dec 12 18:13:02.144032 sshd[1800]: Accepted publickey for core from 139.178.89.65 port 56646 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:13:02.145906 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:13:02.152592 systemd-logind[1603]: New session 2 of user core. Dec 12 18:13:02.168656 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:13:02.303908 sshd[1803]: Connection closed by 139.178.89.65 port 56646 Dec 12 18:13:02.304744 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Dec 12 18:13:02.309872 systemd[1]: sshd@1-172.234.28.21:22-139.178.89.65:56646.service: Deactivated successfully. Dec 12 18:13:02.312229 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:13:02.313073 systemd-logind[1603]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:13:02.315462 systemd-logind[1603]: Removed session 2. Dec 12 18:13:02.367494 systemd[1]: Started sshd@2-172.234.28.21:22-139.178.89.65:56650.service - OpenSSH per-connection server daemon (139.178.89.65:56650). 
Dec 12 18:13:02.667462 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 56650 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:13:02.669288 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:13:02.676056 systemd-logind[1603]: New session 3 of user core. Dec 12 18:13:02.681707 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:13:02.821126 sshd[1812]: Connection closed by 139.178.89.65 port 56650 Dec 12 18:13:02.821651 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Dec 12 18:13:02.827600 systemd[1]: sshd@2-172.234.28.21:22-139.178.89.65:56650.service: Deactivated successfully. Dec 12 18:13:02.830275 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:13:02.831433 systemd-logind[1603]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:13:02.834004 systemd-logind[1603]: Removed session 3. Dec 12 18:13:02.883705 systemd[1]: Started sshd@3-172.234.28.21:22-139.178.89.65:56660.service - OpenSSH per-connection server daemon (139.178.89.65:56660). Dec 12 18:13:03.188869 sshd[1818]: Accepted publickey for core from 139.178.89.65 port 56660 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:13:03.191306 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:13:03.197566 systemd-logind[1603]: New session 4 of user core. Dec 12 18:13:03.208879 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:13:03.352491 sshd[1821]: Connection closed by 139.178.89.65 port 56660 Dec 12 18:13:03.353307 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Dec 12 18:13:03.358322 systemd[1]: sshd@3-172.234.28.21:22-139.178.89.65:56660.service: Deactivated successfully. Dec 12 18:13:03.361260 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:13:03.364766 systemd-logind[1603]: Session 4 logged out. 
Waiting for processes to exit. Dec 12 18:13:03.365949 systemd-logind[1603]: Removed session 4. Dec 12 18:13:03.413381 systemd[1]: Started sshd@4-172.234.28.21:22-139.178.89.65:56666.service - OpenSSH per-connection server daemon (139.178.89.65:56666). Dec 12 18:13:03.727974 sshd[1827]: Accepted publickey for core from 139.178.89.65 port 56666 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:13:03.731326 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:13:03.739496 systemd-logind[1603]: New session 5 of user core. Dec 12 18:13:03.744691 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:13:03.849976 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:13:03.850327 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:13:03.870776 sudo[1831]: pam_unix(sudo:session): session closed for user root Dec 12 18:13:03.922133 sshd[1830]: Connection closed by 139.178.89.65 port 56666 Dec 12 18:13:03.923282 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Dec 12 18:13:03.929371 systemd[1]: sshd@4-172.234.28.21:22-139.178.89.65:56666.service: Deactivated successfully. Dec 12 18:13:03.932027 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:13:03.933893 systemd-logind[1603]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:13:03.935088 systemd-logind[1603]: Removed session 5. Dec 12 18:13:03.986901 systemd[1]: Started sshd@5-172.234.28.21:22-139.178.89.65:56670.service - OpenSSH per-connection server daemon (139.178.89.65:56670). 
Dec 12 18:13:04.287137 sshd[1837]: Accepted publickey for core from 139.178.89.65 port 56670 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:13:04.289534 sshd-session[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:13:04.295371 systemd-logind[1603]: New session 6 of user core. Dec 12 18:13:04.304679 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:13:04.396254 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:13:04.396654 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:13:04.402392 sudo[1842]: pam_unix(sudo:session): session closed for user root Dec 12 18:13:04.412949 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:13:04.413417 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:13:04.426161 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:13:04.471565 kernel: kauditd_printk_skb: 41 callbacks suppressed Dec 12 18:13:04.471703 kernel: audit: type=1305 audit(1765563184.469:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:13:04.469000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 18:13:04.471777 augenrules[1864]: No rules Dec 12 18:13:04.475474 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:13:04.477243 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 12 18:13:04.482999 kernel: audit: type=1300 audit(1765563184.469:230): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff14db9970 a2=420 a3=0 items=0 ppid=1845 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:04.469000 audit[1864]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff14db9970 a2=420 a3=0 items=0 ppid=1845 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:04.480327 sudo[1841]: pam_unix(sudo:session): session closed for user root Dec 12 18:13:04.469000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:13:04.490054 kernel: audit: type=1327 audit(1765563184.469:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 18:13:04.490104 kernel: audit: type=1130 audit(1765563184.478:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:13:04.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:13:04.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:13:04.496877 kernel: audit: type=1131 audit(1765563184.478:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:13:04.478000 audit[1841]: USER_END pid=1841 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:13:04.502424 kernel: audit: type=1106 audit(1765563184.478:233): pid=1841 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:13:04.478000 audit[1841]: CRED_DISP pid=1841 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:13:04.508618 kernel: audit: type=1104 audit(1765563184.478:234): pid=1841 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 18:13:04.533357 sshd[1840]: Connection closed by 139.178.89.65 port 56670 Dec 12 18:13:04.534132 sshd-session[1837]: pam_unix(sshd:session): session closed for user core Dec 12 18:13:04.536000 audit[1837]: USER_END pid=1837 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:04.540300 systemd[1]: sshd@5-172.234.28.21:22-139.178.89.65:56670.service: Deactivated successfully. Dec 12 18:13:04.543670 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:13:04.545908 kernel: audit: type=1106 audit(1765563184.536:235): pid=1837 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:04.536000 audit[1837]: CRED_DISP pid=1837 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:04.546653 systemd-logind[1603]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:13:04.548389 systemd-logind[1603]: Removed session 6. Dec 12 18:13:04.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.234.28.21:22-139.178.89.65:56670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:13:04.553795 kernel: audit: type=1104 audit(1765563184.536:236): pid=1837 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:04.553836 kernel: audit: type=1131 audit(1765563184.538:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.234.28.21:22-139.178.89.65:56670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:13:04.595377 systemd[1]: Started sshd@6-172.234.28.21:22-139.178.89.65:56676.service - OpenSSH per-connection server daemon (139.178.89.65:56676). Dec 12 18:13:04.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.234.28.21:22-139.178.89.65:56676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:13:04.900000 audit[1873]: USER_ACCT pid=1873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:04.901550 sshd[1873]: Accepted publickey for core from 139.178.89.65 port 56676 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:13:04.902000 audit[1873]: CRED_ACQ pid=1873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:04.902000 audit[1873]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe779166c0 a2=3 a3=0 items=0 ppid=1 pid=1873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:04.902000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:13:04.903321 sshd-session[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:13:04.909239 systemd-logind[1603]: New session 7 of user core. Dec 12 18:13:04.919280 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 12 18:13:04.922000 audit[1873]: USER_START pid=1873 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:04.925000 audit[1876]: CRED_ACQ pid=1876 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:05.014000 audit[1877]: USER_ACCT pid=1877 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:13:05.014750 sudo[1877]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:13:05.014000 audit[1877]: CRED_REFR pid=1877 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:13:05.015210 sudo[1877]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:13:05.017000 audit[1877]: USER_START pid=1877 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:13:05.345742 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 12 18:13:05.365001 (dockerd)[1894]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:13:05.604098 dockerd[1894]: time="2025-12-12T18:13:05.603866750Z" level=info msg="Starting up" Dec 12 18:13:05.605088 dockerd[1894]: time="2025-12-12T18:13:05.604863220Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:13:05.616335 dockerd[1894]: time="2025-12-12T18:13:05.616234590Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:13:05.656716 dockerd[1894]: time="2025-12-12T18:13:05.656576920Z" level=info msg="Loading containers: start." Dec 12 18:13:05.667538 kernel: Initializing XFRM netlink socket Dec 12 18:13:05.733000 audit[1943]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.733000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcb45ad690 a2=0 a3=0 items=0 ppid=1894 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.733000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:13:05.735000 audit[1945]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.735000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffb11105d0 a2=0 a3=0 items=0 ppid=1894 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:13:05.735000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:13:05.738000 audit[1947]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.738000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedfa32860 a2=0 a3=0 items=0 ppid=1894 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:13:05.740000 audit[1949]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.740000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8bc81e40 a2=0 a3=0 items=0 ppid=1894 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.740000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:13:05.743000 audit[1951]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.743000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc58246950 a2=0 a3=0 items=0 ppid=1894 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.743000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:13:05.745000 audit[1953]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.745000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffd2d3acd0 a2=0 a3=0 items=0 ppid=1894 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.745000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:13:05.747000 audit[1955]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.747000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff84bf7c70 a2=0 a3=0 items=0 ppid=1894 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.747000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:13:05.750000 audit[1957]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.750000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe90b3c4b0 a2=0 a3=0 items=0 ppid=1894 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.750000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:13:05.782000 audit[1960]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.782000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff37b8f2a0 a2=0 a3=0 items=0 ppid=1894 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.782000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 18:13:05.785000 audit[1962]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.785000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc39d35a40 a2=0 a3=0 items=0 ppid=1894 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.785000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:13:05.787000 audit[1964]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.787000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff3c724720 a2=0 a3=0 items=0 ppid=1894 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.787000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:13:05.790000 audit[1966]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.790000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe75549380 a2=0 a3=0 items=0 ppid=1894 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.790000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:13:05.792000 audit[1968]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.792000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdb43ec500 a2=0 a3=0 items=0 ppid=1894 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.792000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:13:05.835000 audit[1998]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.835000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe1ca8bad0 a2=0 a3=0 items=0 ppid=1894 pid=1998 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.835000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 18:13:05.838000 audit[2000]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.838000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdd8b36010 a2=0 a3=0 items=0 ppid=1894 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.838000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 18:13:05.840000 audit[2002]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.840000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff32bbaaf0 a2=0 a3=0 items=0 ppid=1894 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.840000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 18:13:05.842000 audit[2004]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.842000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec729b3e0 a2=0 a3=0 items=0 ppid=1894 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.842000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 18:13:05.845000 audit[2006]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.845000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef6219960 a2=0 a3=0 items=0 ppid=1894 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.845000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 18:13:05.847000 audit[2008]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.847000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcc94dfcf0 a2=0 a3=0 items=0 ppid=1894 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:13:05.849000 audit[2010]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.849000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeb3f155c0 a2=0 a3=0 items=0 ppid=1894 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 18:13:05.852000 audit[2012]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.852000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe04b184c0 a2=0 a3=0 items=0 ppid=1894 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 18:13:05.855000 audit[2014]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.855000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd8564a760 a2=0 a3=0 items=0 ppid=1894 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.855000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 18:13:05.858000 audit[2016]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 
18:13:05.858000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff2f6d7fb0 a2=0 a3=0 items=0 ppid=1894 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.858000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 18:13:05.860000 audit[2018]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.860000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc2b2045d0 a2=0 a3=0 items=0 ppid=1894 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.860000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 18:13:05.862000 audit[2020]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.862000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffd83bda20 a2=0 a3=0 items=0 ppid=1894 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.862000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 18:13:05.865000 audit[2022]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2022 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.865000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff8ef6c5e0 a2=0 a3=0 items=0 ppid=1894 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.865000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 18:13:05.871000 audit[2027]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.871000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff8f6c98e0 a2=0 a3=0 items=0 ppid=1894 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.871000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:13:05.874000 audit[2029]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.874000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffccc0a27c0 a2=0 a3=0 items=0 ppid=1894 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:13:05.876000 audit[2031]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2031 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:05.876000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff7105ee30 a2=0 a3=0 items=0 ppid=1894 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.876000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 18:13:05.878000 audit[2033]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.878000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcd92d7b60 a2=0 a3=0 items=0 ppid=1894 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.878000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 18:13:05.881000 audit[2035]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:05.881000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffe3547450 a2=0 a3=0 items=0 ppid=1894 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:05.881000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 18:13:05.883000 audit[2037]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2037 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 12 18:13:05.883000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdeb6c6570 a2=0 a3=0 items=0 ppid=1894 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:13:05.883000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 12 18:13:05.892767 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection.
Dec 12 18:13:05.895650 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection.
Dec 12 18:13:05.901050 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection.
Dec 12 18:13:05.903000 audit[2041]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 12 18:13:05.903000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffcec97a310 a2=0 a3=0 items=0 ppid=1894 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:13:05.903000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Dec 12 18:13:05.908000 audit[2045]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 12 18:13:05.908000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe6a3b42e0 a2=0 a3=0 items=0 ppid=1894 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:13:05.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Dec 12 18:13:05.919000 audit[2053]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 12 18:13:05.919000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd3dc2caf0 a2=0 a3=0 items=0 ppid=1894 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:13:05.919000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054
Dec 12 18:13:05.931000 audit[2059]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 12 18:13:05.931000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd3f601830 a2=0 a3=0 items=0 ppid=1894 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:13:05.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50
Dec 12 18:13:05.934000 audit[2061]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 12 18:13:05.934000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffec57ba00 a2=0 a3=0 items=0 ppid=1894 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:13:05.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Dec 12 18:13:05.936000 audit[2063]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 12 18:13:05.936000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff372fdca0 a2=0 a3=0 items=0 ppid=1894 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:13:05.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552
Dec 12 18:13:05.939000 audit[2065]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 12 18:13:05.939000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcd10f4d80 a2=0 a3=0 items=0 ppid=1894 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:13:05.939000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 12 18:13:05.941000 audit[2067]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 12 18:13:05.941000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc77789980 a2=0 a3=0 items=0 ppid=1894 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:13:05.941000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Dec 12 18:13:05.942666 systemd-networkd[1530]: docker0: Link UP
Dec 12 18:13:05.942953 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection.
Dec 12 18:13:05.945995 dockerd[1894]: time="2025-12-12T18:13:05.945929730Z" level=info msg="Loading containers: done."
Dec 12 18:13:05.961730 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1017589683-merged.mount: Deactivated successfully.
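The `proctitle=` values in the audit PROCTITLE records above are the invoked command line, hex-encoded with NUL bytes separating argv entries. A minimal Python sketch to decode one (the sample string is copied verbatim from the DOCKER-ISOLATION-STAGE-2 record above; the helper name is our own, not an auditd tool):

```python
# Decode an auditd PROCTITLE value: hex-encoded bytes, NUL-separated argv.
def decode_proctitle(hexstr: str) -> list[str]:
    raw = bytes.fromhex(hexstr)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00")]

# Sample copied from the audit record above (iptables rule for DOCKER-ISOLATION-STAGE-2).
sample = (
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572"
    "002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32"
    "002D6F00646F636B657230002D6A0044524F50"
)
print(decode_proctitle(sample))
# ['/usr/bin/iptables', '--wait', '-t', 'filter', '-I', 'DOCKER-ISOLATION-STAGE-2', '-o', 'docker0', '-j', 'DROP']
```

This is the same decoding `ausearch -i` performs when rendering PROCTITLE records.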
Dec 12 18:13:05.965945 dockerd[1894]: time="2025-12-12T18:13:05.965898500Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 18:13:05.966199 dockerd[1894]: time="2025-12-12T18:13:05.966160910Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 18:13:05.966295 dockerd[1894]: time="2025-12-12T18:13:05.966278350Z" level=info msg="Initializing buildkit"
Dec 12 18:13:05.989143 dockerd[1894]: time="2025-12-12T18:13:05.989103940Z" level=info msg="Completed buildkit initialization"
Dec 12 18:13:05.996663 dockerd[1894]: time="2025-12-12T18:13:05.996618160Z" level=info msg="Daemon has completed initialization"
Dec 12 18:13:05.997386 dockerd[1894]: time="2025-12-12T18:13:05.996838110Z" level=info msg="API listen on /run/docker.sock"
Dec 12 18:13:05.996961 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 18:13:05.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:13:06.529291 containerd[1629]: time="2025-12-12T18:13:06.529245740Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Dec 12 18:13:07.255717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791951094.mount: Deactivated successfully.
Dec 12 18:13:08.172064 containerd[1629]: time="2025-12-12T18:13:08.171701740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:08.176183 containerd[1629]: time="2025-12-12T18:13:08.174254550Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=27403437"
Dec 12 18:13:08.176183 containerd[1629]: time="2025-12-12T18:13:08.174824890Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:08.177636 containerd[1629]: time="2025-12-12T18:13:08.177591230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:08.178792 containerd[1629]: time="2025-12-12T18:13:08.178306190Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.64902066s"
Dec 12 18:13:08.178792 containerd[1629]: time="2025-12-12T18:13:08.178352630Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\""
Dec 12 18:13:08.179904 containerd[1629]: time="2025-12-12T18:13:08.179878710Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 12 18:13:09.742342 containerd[1629]: time="2025-12-12T18:13:09.741562610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:09.742342 containerd[1629]: time="2025-12-12T18:13:09.742310020Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24983855"
Dec 12 18:13:09.742974 containerd[1629]: time="2025-12-12T18:13:09.742952440Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:09.745439 containerd[1629]: time="2025-12-12T18:13:09.745388150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:09.746076 containerd[1629]: time="2025-12-12T18:13:09.746032120Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.56612713s"
Dec 12 18:13:09.746125 containerd[1629]: time="2025-12-12T18:13:09.746078530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\""
Dec 12 18:13:09.747527 containerd[1629]: time="2025-12-12T18:13:09.747430160Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 12 18:13:09.921204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:13:09.924027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:13:10.114536 kernel: kauditd_printk_skb: 132 callbacks suppressed
Dec 12 18:13:10.114650 kernel: audit: type=1130 audit(1765563190.112:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:13:10.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:13:10.112538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:13:10.128904 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:13:10.173101 kubelet[2177]: E1212 18:13:10.173011 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:13:10.178327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:13:10.178566 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:13:10.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 12 18:13:10.179085 systemd[1]: kubelet.service: Consumed 195ms CPU time, 109M memory peak.
Dec 12 18:13:10.185540 kernel: audit: type=1131 audit(1765563190.178:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 12 18:13:11.052377 containerd[1629]: time="2025-12-12T18:13:11.052310900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:11.053661 containerd[1629]: time="2025-12-12T18:13:11.053367000Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111"
Dec 12 18:13:11.054262 containerd[1629]: time="2025-12-12T18:13:11.054236210Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:11.056301 containerd[1629]: time="2025-12-12T18:13:11.056272610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:11.057220 containerd[1629]: time="2025-12-12T18:13:11.057198340Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.30973875s"
Dec 12 18:13:11.057321 containerd[1629]: time="2025-12-12T18:13:11.057298680Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\""
Dec 12 18:13:11.057908 containerd[1629]: time="2025-12-12T18:13:11.057824770Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 12 18:13:12.290597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334101774.mount: Deactivated successfully.
Dec 12 18:13:12.626836 containerd[1629]: time="2025-12-12T18:13:12.626659080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:12.629384 containerd[1629]: time="2025-12-12T18:13:12.628945330Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31157702"
Dec 12 18:13:12.629938 containerd[1629]: time="2025-12-12T18:13:12.629905540Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:12.631302 containerd[1629]: time="2025-12-12T18:13:12.631272980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:12.631831 containerd[1629]: time="2025-12-12T18:13:12.631799040Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.57393826s"
Dec 12 18:13:12.631880 containerd[1629]: time="2025-12-12T18:13:12.631831930Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\""
Dec 12 18:13:12.632357 containerd[1629]: time="2025-12-12T18:13:12.632322240Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Dec 12 18:13:13.280858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048940613.mount: Deactivated successfully.
Dec 12 18:13:13.923092 containerd[1629]: time="2025-12-12T18:13:13.922307150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:13.923092 containerd[1629]: time="2025-12-12T18:13:13.923060120Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17570148"
Dec 12 18:13:13.923636 containerd[1629]: time="2025-12-12T18:13:13.923588480Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:13.925901 containerd[1629]: time="2025-12-12T18:13:13.925870570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:13.926881 containerd[1629]: time="2025-12-12T18:13:13.926857320Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.29450855s"
Dec 12 18:13:13.926944 containerd[1629]: time="2025-12-12T18:13:13.926883160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Dec 12 18:13:13.927396 containerd[1629]: time="2025-12-12T18:13:13.927375100Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 12 18:13:14.507379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250826851.mount: Deactivated successfully.
Dec 12 18:13:14.511333 containerd[1629]: time="2025-12-12T18:13:14.511298680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:13:14.511968 containerd[1629]: time="2025-12-12T18:13:14.511950350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 12 18:13:14.513273 containerd[1629]: time="2025-12-12T18:13:14.512423290Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:13:14.514330 containerd[1629]: time="2025-12-12T18:13:14.514287750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:13:14.514802 containerd[1629]: time="2025-12-12T18:13:14.514781800Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 587.38196ms"
Dec 12 18:13:14.514874 containerd[1629]: time="2025-12-12T18:13:14.514860680Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 12 18:13:14.515684 containerd[1629]: time="2025-12-12T18:13:14.515662610Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Dec 12 18:13:15.202977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803248716.mount: Deactivated successfully.
Dec 12 18:13:16.709431 containerd[1629]: time="2025-12-12T18:13:16.709363820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:16.710921 containerd[1629]: time="2025-12-12T18:13:16.710182930Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580"
Dec 12 18:13:16.711685 containerd[1629]: time="2025-12-12T18:13:16.711324720Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:16.714393 containerd[1629]: time="2025-12-12T18:13:16.714367910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:13:16.715237 containerd[1629]: time="2025-12-12T18:13:16.715210440Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.19952042s"
Dec 12 18:13:16.715289 containerd[1629]: time="2025-12-12T18:13:16.715239130Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Dec 12 18:13:18.303856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:13:18.304062 systemd[1]: kubelet.service: Consumed 195ms CPU time, 109M memory peak.
Dec 12 18:13:18.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:13:18.313637 kernel: audit: type=1130 audit(1765563198.303:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:13:18.310708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:13:18.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:13:18.321545 kernel: audit: type=1131 audit(1765563198.303:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:13:18.345149 systemd[1]: Reload requested from client PID 2332 ('systemctl') (unit session-7.scope)...
Dec 12 18:13:18.345284 systemd[1]: Reloading...
Dec 12 18:13:18.493528 zram_generator::config[2379]: No configuration found.
Dec 12 18:13:18.702539 systemd[1]: Reloading finished in 356 ms.
Dec 12 18:13:18.736000 audit: BPF prog-id=67 op=LOAD
Dec 12 18:13:18.739564 kernel: audit: type=1334 audit(1765563198.736:292): prog-id=67 op=LOAD
Dec 12 18:13:18.736000 audit: BPF prog-id=50 op=UNLOAD
Dec 12 18:13:18.742534 kernel: audit: type=1334 audit(1765563198.736:293): prog-id=50 op=UNLOAD
Dec 12 18:13:18.736000 audit: BPF prog-id=68 op=LOAD
Dec 12 18:13:18.736000 audit: BPF prog-id=69 op=LOAD
Dec 12 18:13:18.748584 kernel: audit: type=1334 audit(1765563198.736:294): prog-id=68 op=LOAD
Dec 12 18:13:18.748634 kernel: audit: type=1334 audit(1765563198.736:295): prog-id=69 op=LOAD
Dec 12 18:13:18.748664 kernel: audit: type=1334 audit(1765563198.736:296): prog-id=51 op=UNLOAD
Dec 12 18:13:18.736000 audit: BPF prog-id=51 op=UNLOAD
Dec 12 18:13:18.750536 kernel: audit: type=1334 audit(1765563198.736:297): prog-id=52 op=UNLOAD
Dec 12 18:13:18.736000 audit: BPF prog-id=52 op=UNLOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=70 op=LOAD
Dec 12 18:13:18.752968 kernel: audit: type=1334 audit(1765563198.740:298): prog-id=70 op=LOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=60 op=UNLOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=71 op=LOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=72 op=LOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=61 op=UNLOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=62 op=UNLOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=73 op=LOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=55 op=UNLOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=74 op=LOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=75 op=LOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=56 op=UNLOAD
Dec 12 18:13:18.740000 audit: BPF prog-id=57 op=UNLOAD
Dec 12 18:13:18.742000 audit: BPF prog-id=76 op=LOAD
Dec 12 18:13:18.742000 audit: BPF prog-id=66 op=UNLOAD
Dec 12 18:13:18.761610 kernel: audit: type=1334 audit(1765563198.740:299): prog-id=60 op=UNLOAD
Dec 12 18:13:18.743000 audit: BPF prog-id=77 op=LOAD
Dec 12 18:13:18.743000 audit: BPF prog-id=78 op=LOAD
Dec 12 18:13:18.743000 audit: BPF prog-id=53 op=UNLOAD
Dec 12 18:13:18.743000 audit: BPF prog-id=54 op=UNLOAD
Dec 12 18:13:18.744000 audit: BPF prog-id=79 op=LOAD
Dec 12 18:13:18.744000 audit: BPF prog-id=59 op=UNLOAD
Dec 12 18:13:18.744000 audit: BPF prog-id=80 op=LOAD
Dec 12 18:13:18.746000 audit: BPF prog-id=46 op=UNLOAD
Dec 12 18:13:18.746000 audit: BPF prog-id=81 op=LOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=82 op=LOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=47 op=UNLOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=48 op=UNLOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=83 op=LOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=58 op=UNLOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=84 op=LOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=49 op=UNLOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=85 op=LOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=63 op=UNLOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=86 op=LOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=87 op=LOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=64 op=UNLOAD
Dec 12 18:13:18.752000 audit: BPF prog-id=65 op=UNLOAD
Dec 12 18:13:18.757000 audit: BPF prog-id=88 op=LOAD
Dec 12 18:13:18.757000 audit: BPF prog-id=43 op=UNLOAD
Dec 12 18:13:18.757000 audit: BPF prog-id=89 op=LOAD
Dec 12 18:13:18.757000 audit: BPF prog-id=90 op=LOAD
Dec 12 18:13:18.757000 audit: BPF prog-id=44 op=UNLOAD
Dec 12 18:13:18.757000 audit: BPF prog-id=45 op=UNLOAD
Dec 12 18:13:18.774441 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 18:13:18.774566 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 18:13:18.774950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:13:18.775005 systemd[1]: kubelet.service: Consumed 134ms CPU time, 98.6M memory peak.
Dec 12 18:13:18.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 12 18:13:18.776712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:13:18.960818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:13:18.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:13:18.971022 (kubelet)[2433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:13:19.010168 kubelet[2433]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:13:19.010168 kubelet[2433]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 18:13:19.010168 kubelet[2433]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:13:19.010427 kubelet[2433]: I1212 18:13:19.010219 2433 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 18:13:19.545183 kubelet[2433]: I1212 18:13:19.545126 2433 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 12 18:13:19.545183 kubelet[2433]: I1212 18:13:19.545156 2433 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 18:13:19.545433 kubelet[2433]: I1212 18:13:19.545406 2433 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 12 18:13:19.576922 kubelet[2433]: I1212 18:13:19.576579 2433 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 18:13:19.576922 kubelet[2433]: E1212 18:13:19.576814 2433 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.28.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.28.21:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:13:19.588383 kubelet[2433]: I1212 18:13:19.588352 2433 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 18:13:19.592046 kubelet[2433]: I1212 18:13:19.592004 2433 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 18:13:19.592305 kubelet[2433]: I1212 18:13:19.592271 2433 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 18:13:19.592476 kubelet[2433]: I1212 18:13:19.592302 2433 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-28-21","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 18:13:19.593128 kubelet[2433]: I1212 18:13:19.593100 2433 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 18:13:19.593128 kubelet[2433]: I1212 18:13:19.593121 2433 container_manager_linux.go:304] "Creating device plugin manager"
Dec 12 18:13:19.593293 kubelet[2433]: I1212 18:13:19.593266 2433 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:13:19.597000 kubelet[2433]: I1212 18:13:19.596888 2433 kubelet.go:446] "Attempting to sync node with API server"
Dec 12 18:13:19.597000 kubelet[2433]: I1212 18:13:19.596918 2433 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 18:13:19.597000 kubelet[2433]: I1212 18:13:19.596941 2433 kubelet.go:352] "Adding apiserver pod source"
Dec 12 18:13:19.597000 kubelet[2433]: I1212 18:13:19.596952 2433 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 18:13:19.604037 kubelet[2433]: W1212 18:13:19.603987 2433 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.28.21:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-28-21&limit=500&resourceVersion=0": dial tcp 172.234.28.21:6443: connect: connection refused
Dec 12 18:13:19.604155 kubelet[2433]: E1212 18:13:19.604133 2433 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.28.21:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-28-21&limit=500&resourceVersion=0\": dial tcp 172.234.28.21:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:13:19.604336 kubelet[2433]: I1212 18:13:19.604317 2433 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Dec 12 18:13:19.604755 kubelet[2433]: I1212 18:13:19.604741 2433 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 12 18:13:19.604852 kubelet[2433]: W1212 18:13:19.604841 2433 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 18:13:19.608822 kubelet[2433]: W1212 18:13:19.608419 2433 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.28.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.28.21:6443: connect: connection refused
Dec 12 18:13:19.608822 kubelet[2433]: E1212 18:13:19.608464 2433 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.28.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.28.21:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:13:19.608972 kubelet[2433]: I1212 18:13:19.608936 2433 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 18:13:19.609039 kubelet[2433]: I1212 18:13:19.608991 2433 server.go:1287] "Started kubelet"
Dec 12 18:13:19.609581 kubelet[2433]: I1212 18:13:19.609547 2433 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 18:13:19.610587 kubelet[2433]: I1212 18:13:19.610561 2433 server.go:479] "Adding debug handlers to kubelet server"
Dec 12 18:13:19.612798 kubelet[2433]: I1212 18:13:19.612545 2433 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 18:13:19.613536 kubelet[2433]: I1212 18:13:19.612979 2433 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 18:13:19.614201 kubelet[2433]: I1212 18:13:19.614173 2433 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 18:13:19.615227 kubelet[2433]: E1212 18:13:19.614100 2433 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.28.21:6443/api/v1/namespaces/default/events\": dial tcp 172.234.28.21:6443:
connect: connection refused" event="&Event{ObjectMeta:{172-234-28-21.18808a602d319b0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-28-21,UID:172-234-28-21,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-28-21,},FirstTimestamp:2025-12-12 18:13:19.60895361 +0000 UTC m=+0.633679911,LastTimestamp:2025-12-12 18:13:19.60895361 +0000 UTC m=+0.633679911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-28-21,}" Dec 12 18:13:19.617784 kubelet[2433]: I1212 18:13:19.616899 2433 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:13:19.620572 kubelet[2433]: E1212 18:13:19.620556 2433 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:13:19.620770 kubelet[2433]: E1212 18:13:19.620758 2433 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-28-21\" not found" Dec 12 18:13:19.620858 kubelet[2433]: I1212 18:13:19.620849 2433 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:13:19.621036 kubelet[2433]: I1212 18:13:19.621022 2433 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:13:19.621130 kubelet[2433]: I1212 18:13:19.621120 2433 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:13:19.620000 audit[2444]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:19.620000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd341a1c00 a2=0 a3=0 items=0 ppid=2433 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:13:19.621919 kubelet[2433]: W1212 18:13:19.621892 2433 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.28.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.28.21:6443: connect: connection refused Dec 12 18:13:19.622017 kubelet[2433]: E1212 18:13:19.622003 2433 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.28.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.28.21:6443: connect: connection refused" 
logger="UnhandledError" Dec 12 18:13:19.622239 kubelet[2433]: I1212 18:13:19.622223 2433 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:13:19.622000 audit[2445]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:19.622000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6ecb8c10 a2=0 a3=0 items=0 ppid=2433 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.622000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:13:19.623821 kubelet[2433]: E1212 18:13:19.623759 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.28.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-28-21?timeout=10s\": dial tcp 172.234.28.21:6443: connect: connection refused" interval="200ms" Dec 12 18:13:19.623969 kubelet[2433]: I1212 18:13:19.623951 2433 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:13:19.624046 kubelet[2433]: I1212 18:13:19.624035 2433 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:13:19.624000 audit[2447]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:19.624000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdec99a420 a2=0 a3=0 items=0 ppid=2433 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.624000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:13:19.627000 audit[2449]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:19.627000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe5388f910 a2=0 a3=0 items=0 ppid=2433 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.627000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:13:19.635000 audit[2452]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:19.635000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd4894ac60 a2=0 a3=0 items=0 ppid=2433 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.635000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 18:13:19.637569 kubelet[2433]: I1212 18:13:19.637023 2433 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 12 18:13:19.638000 audit[2453]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:19.638000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd871d21e0 a2=0 a3=0 items=0 ppid=2433 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.638000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 18:13:19.639934 kubelet[2433]: I1212 18:13:19.639905 2433 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:13:19.639934 kubelet[2433]: I1212 18:13:19.639929 2433 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:13:19.639991 kubelet[2433]: I1212 18:13:19.639949 2433 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:13:19.639991 kubelet[2433]: I1212 18:13:19.639957 2433 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:13:19.640055 kubelet[2433]: E1212 18:13:19.640016 2433 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:13:19.640000 audit[2455]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:19.640000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffcdaf6170 a2=0 a3=0 items=0 ppid=2433 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.640000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:13:19.642000 audit[2456]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:19.642000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff482e8260 a2=0 a3=0 items=0 ppid=2433 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.642000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:13:19.644000 audit[2458]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:19.644000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf8572fd0 a2=0 a3=0 items=0 ppid=2433 pid=2458 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.644000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:13:19.646000 audit[2460]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:19.646000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcff14d580 a2=0 a3=0 items=0 ppid=2433 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 18:13:19.647000 audit[2461]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:19.647000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6390d2c0 a2=0 a3=0 items=0 ppid=2433 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 18:13:19.649000 audit[2462]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:19.649000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb6761d30 a2=0 a3=0 items=0 
ppid=2433 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:19.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 18:13:19.651249 kubelet[2433]: W1212 18:13:19.651203 2433 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.28.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.28.21:6443: connect: connection refused Dec 12 18:13:19.651306 kubelet[2433]: E1212 18:13:19.651258 2433 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.28.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.28.21:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:13:19.656108 kubelet[2433]: I1212 18:13:19.656078 2433 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:13:19.656108 kubelet[2433]: I1212 18:13:19.656092 2433 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:13:19.656108 kubelet[2433]: I1212 18:13:19.656107 2433 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:13:19.658021 kubelet[2433]: I1212 18:13:19.657980 2433 policy_none.go:49] "None policy: Start" Dec 12 18:13:19.658021 kubelet[2433]: I1212 18:13:19.658000 2433 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:13:19.658021 kubelet[2433]: I1212 18:13:19.658013 2433 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:13:19.664590 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 12 18:13:19.683847 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:13:19.694550 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:13:19.697587 kubelet[2433]: I1212 18:13:19.697524 2433 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:13:19.697746 kubelet[2433]: I1212 18:13:19.697731 2433 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:13:19.697774 kubelet[2433]: I1212 18:13:19.697747 2433 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:13:19.698006 kubelet[2433]: I1212 18:13:19.697978 2433 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:13:19.699686 kubelet[2433]: E1212 18:13:19.699659 2433 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:13:19.699735 kubelet[2433]: E1212 18:13:19.699690 2433 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-28-21\" not found" Dec 12 18:13:19.753036 systemd[1]: Created slice kubepods-burstable-pod7d402a1e1decf8950f6d505b0e31998c.slice - libcontainer container kubepods-burstable-pod7d402a1e1decf8950f6d505b0e31998c.slice. Dec 12 18:13:19.772964 kubelet[2433]: E1212 18:13:19.772714 2433 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-28-21\" not found" node="172-234-28-21" Dec 12 18:13:19.775402 systemd[1]: Created slice kubepods-burstable-podfabc874faa7d1cceb9a27f2090c29f50.slice - libcontainer container kubepods-burstable-podfabc874faa7d1cceb9a27f2090c29f50.slice. 
Dec 12 18:13:19.783896 kubelet[2433]: E1212 18:13:19.783835 2433 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-28-21\" not found" node="172-234-28-21" Dec 12 18:13:19.787831 systemd[1]: Created slice kubepods-burstable-podc1b922e9067be7eb88098b58eebafcb2.slice - libcontainer container kubepods-burstable-podc1b922e9067be7eb88098b58eebafcb2.slice. Dec 12 18:13:19.789734 kubelet[2433]: E1212 18:13:19.789706 2433 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-28-21\" not found" node="172-234-28-21" Dec 12 18:13:19.801536 kubelet[2433]: I1212 18:13:19.799927 2433 kubelet_node_status.go:75] "Attempting to register node" node="172-234-28-21" Dec 12 18:13:19.801536 kubelet[2433]: E1212 18:13:19.800256 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.28.21:6443/api/v1/nodes\": dial tcp 172.234.28.21:6443: connect: connection refused" node="172-234-28-21" Dec 12 18:13:19.821972 kubelet[2433]: I1212 18:13:19.821931 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d402a1e1decf8950f6d505b0e31998c-k8s-certs\") pod \"kube-apiserver-172-234-28-21\" (UID: \"7d402a1e1decf8950f6d505b0e31998c\") " pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:19.821972 kubelet[2433]: I1212 18:13:19.821960 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fabc874faa7d1cceb9a27f2090c29f50-ca-certs\") pod \"kube-controller-manager-172-234-28-21\" (UID: \"fabc874faa7d1cceb9a27f2090c29f50\") " pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:19.822071 kubelet[2433]: I1212 18:13:19.821979 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fabc874faa7d1cceb9a27f2090c29f50-k8s-certs\") pod \"kube-controller-manager-172-234-28-21\" (UID: \"fabc874faa7d1cceb9a27f2090c29f50\") " pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:19.822071 kubelet[2433]: I1212 18:13:19.821994 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fabc874faa7d1cceb9a27f2090c29f50-kubeconfig\") pod \"kube-controller-manager-172-234-28-21\" (UID: \"fabc874faa7d1cceb9a27f2090c29f50\") " pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:19.822071 kubelet[2433]: I1212 18:13:19.822009 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1b922e9067be7eb88098b58eebafcb2-kubeconfig\") pod \"kube-scheduler-172-234-28-21\" (UID: \"c1b922e9067be7eb88098b58eebafcb2\") " pod="kube-system/kube-scheduler-172-234-28-21" Dec 12 18:13:19.822071 kubelet[2433]: I1212 18:13:19.822023 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fabc874faa7d1cceb9a27f2090c29f50-flexvolume-dir\") pod \"kube-controller-manager-172-234-28-21\" (UID: \"fabc874faa7d1cceb9a27f2090c29f50\") " pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:19.822071 kubelet[2433]: I1212 18:13:19.822045 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fabc874faa7d1cceb9a27f2090c29f50-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-28-21\" (UID: \"fabc874faa7d1cceb9a27f2090c29f50\") " pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:19.822243 kubelet[2433]: I1212 18:13:19.822060 2433 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d402a1e1decf8950f6d505b0e31998c-ca-certs\") pod \"kube-apiserver-172-234-28-21\" (UID: \"7d402a1e1decf8950f6d505b0e31998c\") " pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:19.822243 kubelet[2433]: I1212 18:13:19.822074 2433 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d402a1e1decf8950f6d505b0e31998c-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-28-21\" (UID: \"7d402a1e1decf8950f6d505b0e31998c\") " pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:19.824250 kubelet[2433]: E1212 18:13:19.824211 2433 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.28.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-28-21?timeout=10s\": dial tcp 172.234.28.21:6443: connect: connection refused" interval="400ms" Dec 12 18:13:20.003428 kubelet[2433]: I1212 18:13:20.003107 2433 kubelet_node_status.go:75] "Attempting to register node" node="172-234-28-21" Dec 12 18:13:20.003775 kubelet[2433]: E1212 18:13:20.003713 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.28.21:6443/api/v1/nodes\": dial tcp 172.234.28.21:6443: connect: connection refused" node="172-234-28-21" Dec 12 18:13:20.073646 kubelet[2433]: E1212 18:13:20.073425 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:20.074755 containerd[1629]: time="2025-12-12T18:13:20.074723290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-28-21,Uid:7d402a1e1decf8950f6d505b0e31998c,Namespace:kube-system,Attempt:0,}" Dec 12 
18:13:20.085404 kubelet[2433]: E1212 18:13:20.085377 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:20.086544 containerd[1629]: time="2025-12-12T18:13:20.085985750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-28-21,Uid:fabc874faa7d1cceb9a27f2090c29f50,Namespace:kube-system,Attempt:0,}" Dec 12 18:13:20.090441 kubelet[2433]: E1212 18:13:20.090257 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:20.090543 containerd[1629]: time="2025-12-12T18:13:20.090497600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-28-21,Uid:c1b922e9067be7eb88098b58eebafcb2,Namespace:kube-system,Attempt:0,}" Dec 12 18:13:20.095680 containerd[1629]: time="2025-12-12T18:13:20.095603180Z" level=info msg="connecting to shim 451c0e24ebbeafb2203fa5a23a972457e53310a5f76dc354ed09a7bf88e0b36c" address="unix:///run/containerd/s/f75f899500389df73af81427cf94245d30acb78b2a3d49bdadff5096cd0f4689" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:13:20.122975 containerd[1629]: time="2025-12-12T18:13:20.122758710Z" level=info msg="connecting to shim 0f8b1b9ca42bab166286bfd1b4c1ca04478ec83f3e0cacbd89a0338d62efab79" address="unix:///run/containerd/s/9b04feb2b8b254be7cfbcd248e1742c16920d21ec030e0a35ad23e818d7706f4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:13:20.137739 systemd[1]: Started cri-containerd-451c0e24ebbeafb2203fa5a23a972457e53310a5f76dc354ed09a7bf88e0b36c.scope - libcontainer container 451c0e24ebbeafb2203fa5a23a972457e53310a5f76dc354ed09a7bf88e0b36c. 
Dec 12 18:13:20.145706 containerd[1629]: time="2025-12-12T18:13:20.145676420Z" level=info msg="connecting to shim ec01075cabd2d888fd7c6c40bb834e168f73e9aaabd12f0a6e36e0831575c47e" address="unix:///run/containerd/s/7c485a4c44ec22472e6b47095ade7b0b1e3698f923f06b4b9f09b9261d20816b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:13:20.174709 systemd[1]: Started cri-containerd-0f8b1b9ca42bab166286bfd1b4c1ca04478ec83f3e0cacbd89a0338d62efab79.scope - libcontainer container 0f8b1b9ca42bab166286bfd1b4c1ca04478ec83f3e0cacbd89a0338d62efab79. Dec 12 18:13:20.178000 audit: BPF prog-id=91 op=LOAD Dec 12 18:13:20.179000 audit: BPF prog-id=92 op=LOAD Dec 12 18:13:20.179000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=2472 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435316330653234656262656166623232303366613561323361393732 Dec 12 18:13:20.179000 audit: BPF prog-id=92 op=UNLOAD Dec 12 18:13:20.179000 audit[2484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2472 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435316330653234656262656166623232303366613561323361393732 Dec 12 18:13:20.180000 audit: BPF prog-id=93 
op=LOAD Dec 12 18:13:20.180000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=2472 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435316330653234656262656166623232303366613561323361393732 Dec 12 18:13:20.180000 audit: BPF prog-id=94 op=LOAD Dec 12 18:13:20.181975 systemd[1]: Started cri-containerd-ec01075cabd2d888fd7c6c40bb834e168f73e9aaabd12f0a6e36e0831575c47e.scope - libcontainer container ec01075cabd2d888fd7c6c40bb834e168f73e9aaabd12f0a6e36e0831575c47e. Dec 12 18:13:20.180000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=2472 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435316330653234656262656166623232303366613561323361393732 Dec 12 18:13:20.181000 audit: BPF prog-id=94 op=UNLOAD Dec 12 18:13:20.181000 audit[2484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2472 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.181000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435316330653234656262656166623232303366613561323361393732 Dec 12 18:13:20.182000 audit: BPF prog-id=93 op=UNLOAD Dec 12 18:13:20.182000 audit[2484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2472 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435316330653234656262656166623232303366613561323361393732 Dec 12 18:13:20.182000 audit: BPF prog-id=95 op=LOAD Dec 12 18:13:20.182000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=2472 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435316330653234656262656166623232303366613561323361393732 Dec 12 18:13:20.197000 audit: BPF prog-id=96 op=LOAD Dec 12 18:13:20.198000 audit: BPF prog-id=97 op=LOAD Dec 12 18:13:20.198000 audit[2522]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066386231623963613432626162313636323836626664316234633163 Dec 12 18:13:20.198000 audit: BPF prog-id=97 op=UNLOAD Dec 12 18:13:20.198000 audit[2522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066386231623963613432626162313636323836626664316234633163 Dec 12 18:13:20.199000 audit: BPF prog-id=98 op=LOAD Dec 12 18:13:20.199000 audit[2522]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066386231623963613432626162313636323836626664316234633163 Dec 12 18:13:20.199000 audit: BPF prog-id=99 op=LOAD Dec 12 18:13:20.199000 audit[2522]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066386231623963613432626162313636323836626664316234633163 Dec 12 18:13:20.199000 audit: BPF prog-id=99 op=UNLOAD Dec 12 18:13:20.199000 audit[2522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066386231623963613432626162313636323836626664316234633163 Dec 12 18:13:20.199000 audit: BPF prog-id=98 op=UNLOAD Dec 12 18:13:20.199000 audit[2522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066386231623963613432626162313636323836626664316234633163 Dec 12 18:13:20.200000 audit: BPF prog-id=100 op=LOAD Dec 12 18:13:20.200000 audit[2522]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2494 pid=2522 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066386231623963613432626162313636323836626664316234633163 Dec 12 18:13:20.219000 audit: BPF prog-id=101 op=LOAD Dec 12 18:13:20.220000 audit: BPF prog-id=102 op=LOAD Dec 12 18:13:20.220000 audit[2549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2524 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563303130373563616264326438383866643763366334306262383334 Dec 12 18:13:20.221000 audit: BPF prog-id=102 op=UNLOAD Dec 12 18:13:20.221000 audit[2549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2524 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563303130373563616264326438383866643763366334306262383334 Dec 12 18:13:20.224000 audit: BPF prog-id=103 op=LOAD Dec 12 18:13:20.224000 audit[2549]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2524 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563303130373563616264326438383866643763366334306262383334 Dec 12 18:13:20.224000 audit: BPF prog-id=104 op=LOAD Dec 12 18:13:20.224000 audit[2549]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2524 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563303130373563616264326438383866643763366334306262383334 Dec 12 18:13:20.225000 audit: BPF prog-id=104 op=UNLOAD Dec 12 18:13:20.225000 audit[2549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2524 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563303130373563616264326438383866643763366334306262383334 Dec 12 18:13:20.226301 kubelet[2433]: E1212 18:13:20.226223 2433 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.28.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-28-21?timeout=10s\": dial tcp 172.234.28.21:6443: connect: connection refused" interval="800ms" Dec 12 18:13:20.225000 audit: BPF prog-id=103 op=UNLOAD Dec 12 18:13:20.225000 audit[2549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2524 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563303130373563616264326438383866643763366334306262383334 Dec 12 18:13:20.226000 audit: BPF prog-id=105 op=LOAD Dec 12 18:13:20.226000 audit[2549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2524 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563303130373563616264326438383866643763366334306262383334 Dec 12 18:13:20.260500 containerd[1629]: time="2025-12-12T18:13:20.260313200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-28-21,Uid:7d402a1e1decf8950f6d505b0e31998c,Namespace:kube-system,Attempt:0,} returns sandbox id \"451c0e24ebbeafb2203fa5a23a972457e53310a5f76dc354ed09a7bf88e0b36c\"" Dec 12 18:13:20.264781 kubelet[2433]: E1212 
18:13:20.264732 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:20.272361 containerd[1629]: time="2025-12-12T18:13:20.272111410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-28-21,Uid:fabc874faa7d1cceb9a27f2090c29f50,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f8b1b9ca42bab166286bfd1b4c1ca04478ec83f3e0cacbd89a0338d62efab79\"" Dec 12 18:13:20.272720 kubelet[2433]: E1212 18:13:20.272702 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:20.275867 containerd[1629]: time="2025-12-12T18:13:20.275825870Z" level=info msg="CreateContainer within sandbox \"0f8b1b9ca42bab166286bfd1b4c1ca04478ec83f3e0cacbd89a0338d62efab79\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:13:20.276889 containerd[1629]: time="2025-12-12T18:13:20.276112250Z" level=info msg="CreateContainer within sandbox \"451c0e24ebbeafb2203fa5a23a972457e53310a5f76dc354ed09a7bf88e0b36c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:13:20.284623 containerd[1629]: time="2025-12-12T18:13:20.284542500Z" level=info msg="Container 76dfbbf9c3d67e52aedcfc2cedf71a1f3b3cd256c31ffa940aa6ceb3bb98c456: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:13:20.288003 containerd[1629]: time="2025-12-12T18:13:20.287980770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-28-21,Uid:c1b922e9067be7eb88098b58eebafcb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec01075cabd2d888fd7c6c40bb834e168f73e9aaabd12f0a6e36e0831575c47e\"" Dec 12 18:13:20.289758 kubelet[2433]: E1212 18:13:20.289630 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:20.293279 containerd[1629]: time="2025-12-12T18:13:20.293243920Z" level=info msg="CreateContainer within sandbox \"ec01075cabd2d888fd7c6c40bb834e168f73e9aaabd12f0a6e36e0831575c47e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:13:20.293637 containerd[1629]: time="2025-12-12T18:13:20.293526480Z" level=info msg="Container e470b432b9c2eacfd11f188573a5952c9b3d24a541929ae648722fa8df98ce42: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:13:20.300349 containerd[1629]: time="2025-12-12T18:13:20.300315420Z" level=info msg="CreateContainer within sandbox \"0f8b1b9ca42bab166286bfd1b4c1ca04478ec83f3e0cacbd89a0338d62efab79\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"76dfbbf9c3d67e52aedcfc2cedf71a1f3b3cd256c31ffa940aa6ceb3bb98c456\"" Dec 12 18:13:20.301447 containerd[1629]: time="2025-12-12T18:13:20.301424280Z" level=info msg="StartContainer for \"76dfbbf9c3d67e52aedcfc2cedf71a1f3b3cd256c31ffa940aa6ceb3bb98c456\"" Dec 12 18:13:20.302919 containerd[1629]: time="2025-12-12T18:13:20.302881580Z" level=info msg="connecting to shim 76dfbbf9c3d67e52aedcfc2cedf71a1f3b3cd256c31ffa940aa6ceb3bb98c456" address="unix:///run/containerd/s/9b04feb2b8b254be7cfbcd248e1742c16920d21ec030e0a35ad23e818d7706f4" protocol=ttrpc version=3 Dec 12 18:13:20.305342 containerd[1629]: time="2025-12-12T18:13:20.305299360Z" level=info msg="CreateContainer within sandbox \"451c0e24ebbeafb2203fa5a23a972457e53310a5f76dc354ed09a7bf88e0b36c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e470b432b9c2eacfd11f188573a5952c9b3d24a541929ae648722fa8df98ce42\"" Dec 12 18:13:20.306890 containerd[1629]: time="2025-12-12T18:13:20.306855850Z" level=info msg="StartContainer for \"e470b432b9c2eacfd11f188573a5952c9b3d24a541929ae648722fa8df98ce42\"" Dec 12 18:13:20.308890 
containerd[1629]: time="2025-12-12T18:13:20.308858390Z" level=info msg="connecting to shim e470b432b9c2eacfd11f188573a5952c9b3d24a541929ae648722fa8df98ce42" address="unix:///run/containerd/s/f75f899500389df73af81427cf94245d30acb78b2a3d49bdadff5096cd0f4689" protocol=ttrpc version=3 Dec 12 18:13:20.309913 containerd[1629]: time="2025-12-12T18:13:20.309874730Z" level=info msg="Container 7060f36c78dd51785ea98927dd5ff372e6b0072d9c2ac7e31cb62e30da2d373e: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:13:20.316866 containerd[1629]: time="2025-12-12T18:13:20.316836280Z" level=info msg="CreateContainer within sandbox \"ec01075cabd2d888fd7c6c40bb834e168f73e9aaabd12f0a6e36e0831575c47e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7060f36c78dd51785ea98927dd5ff372e6b0072d9c2ac7e31cb62e30da2d373e\"" Dec 12 18:13:20.318385 containerd[1629]: time="2025-12-12T18:13:20.317265560Z" level=info msg="StartContainer for \"7060f36c78dd51785ea98927dd5ff372e6b0072d9c2ac7e31cb62e30da2d373e\"" Dec 12 18:13:20.318385 containerd[1629]: time="2025-12-12T18:13:20.318061960Z" level=info msg="connecting to shim 7060f36c78dd51785ea98927dd5ff372e6b0072d9c2ac7e31cb62e30da2d373e" address="unix:///run/containerd/s/7c485a4c44ec22472e6b47095ade7b0b1e3698f923f06b4b9f09b9261d20816b" protocol=ttrpc version=3 Dec 12 18:13:20.323829 kubelet[2433]: E1212 18:13:20.323662 2433 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.28.21:6443/api/v1/namespaces/default/events\": dial tcp 172.234.28.21:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-28-21.18808a602d319b0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-28-21,UID:172-234-28-21,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-28-21,},FirstTimestamp:2025-12-12 18:13:19.60895361 +0000 UTC 
m=+0.633679911,LastTimestamp:2025-12-12 18:13:19.60895361 +0000 UTC m=+0.633679911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-28-21,}" Dec 12 18:13:20.337687 systemd[1]: Started cri-containerd-76dfbbf9c3d67e52aedcfc2cedf71a1f3b3cd256c31ffa940aa6ceb3bb98c456.scope - libcontainer container 76dfbbf9c3d67e52aedcfc2cedf71a1f3b3cd256c31ffa940aa6ceb3bb98c456. Dec 12 18:13:20.346675 systemd[1]: Started cri-containerd-e470b432b9c2eacfd11f188573a5952c9b3d24a541929ae648722fa8df98ce42.scope - libcontainer container e470b432b9c2eacfd11f188573a5952c9b3d24a541929ae648722fa8df98ce42. Dec 12 18:13:20.357776 systemd[1]: Started cri-containerd-7060f36c78dd51785ea98927dd5ff372e6b0072d9c2ac7e31cb62e30da2d373e.scope - libcontainer container 7060f36c78dd51785ea98927dd5ff372e6b0072d9c2ac7e31cb62e30da2d373e. Dec 12 18:13:20.375000 audit: BPF prog-id=106 op=LOAD Dec 12 18:13:20.375000 audit: BPF prog-id=107 op=LOAD Dec 12 18:13:20.375000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2472 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534373062343332623963326561636664313166313838353733613539 Dec 12 18:13:20.375000 audit: BPF prog-id=107 op=UNLOAD Dec 12 18:13:20.375000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2472 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:13:20.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534373062343332623963326561636664313166313838353733613539 Dec 12 18:13:20.376000 audit: BPF prog-id=108 op=LOAD Dec 12 18:13:20.376000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2472 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534373062343332623963326561636664313166313838353733613539 Dec 12 18:13:20.376000 audit: BPF prog-id=109 op=LOAD Dec 12 18:13:20.376000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2472 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534373062343332623963326561636664313166313838353733613539 Dec 12 18:13:20.376000 audit: BPF prog-id=109 op=UNLOAD Dec 12 18:13:20.376000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2472 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534373062343332623963326561636664313166313838353733613539 Dec 12 18:13:20.376000 audit: BPF prog-id=108 op=UNLOAD Dec 12 18:13:20.376000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2472 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534373062343332623963326561636664313166313838353733613539 Dec 12 18:13:20.376000 audit: BPF prog-id=110 op=LOAD Dec 12 18:13:20.376000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2472 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534373062343332623963326561636664313166313838353733613539 Dec 12 18:13:20.381000 audit: BPF prog-id=111 op=LOAD Dec 12 18:13:20.382000 audit: BPF prog-id=112 op=LOAD Dec 12 18:13:20.382000 audit[2602]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2494 pid=2602 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736646662626639633364363765353261656463666332636564663731 Dec 12 18:13:20.383000 audit: BPF prog-id=112 op=UNLOAD Dec 12 18:13:20.383000 audit[2602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736646662626639633364363765353261656463666332636564663731 Dec 12 18:13:20.383000 audit: BPF prog-id=113 op=LOAD Dec 12 18:13:20.383000 audit[2602]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2494 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736646662626639633364363765353261656463666332636564663731 Dec 12 18:13:20.384000 audit: BPF prog-id=114 op=LOAD Dec 12 18:13:20.385000 audit: BPF prog-id=115 op=LOAD Dec 12 18:13:20.384000 audit[2602]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2494 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736646662626639633364363765353261656463666332636564663731 Dec 12 18:13:20.385000 audit: BPF prog-id=114 op=UNLOAD Dec 12 18:13:20.385000 audit[2602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736646662626639633364363765353261656463666332636564663731 Dec 12 18:13:20.385000 audit: BPF prog-id=113 op=UNLOAD Dec 12 18:13:20.385000 audit[2602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2494 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736646662626639633364363765353261656463666332636564663731 Dec 12 18:13:20.385000 audit: BPF prog-id=116 op=LOAD Dec 12 
18:13:20.385000 audit[2602]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2494 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736646662626639633364363765353261656463666332636564663731 Dec 12 18:13:20.386000 audit: BPF prog-id=117 op=LOAD Dec 12 18:13:20.386000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2524 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363066333663373864643531373835656139383932376464356666 Dec 12 18:13:20.386000 audit: BPF prog-id=117 op=UNLOAD Dec 12 18:13:20.386000 audit[2615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2524 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363066333663373864643531373835656139383932376464356666 Dec 12 
18:13:20.387000 audit: BPF prog-id=118 op=LOAD Dec 12 18:13:20.387000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2524 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363066333663373864643531373835656139383932376464356666 Dec 12 18:13:20.387000 audit: BPF prog-id=119 op=LOAD Dec 12 18:13:20.387000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2524 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363066333663373864643531373835656139383932376464356666 Dec 12 18:13:20.387000 audit: BPF prog-id=119 op=UNLOAD Dec 12 18:13:20.387000 audit[2615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2524 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.387000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363066333663373864643531373835656139383932376464356666 Dec 12 18:13:20.387000 audit: BPF prog-id=118 op=UNLOAD Dec 12 18:13:20.387000 audit[2615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2524 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363066333663373864643531373835656139383932376464356666 Dec 12 18:13:20.387000 audit: BPF prog-id=120 op=LOAD Dec 12 18:13:20.387000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2524 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:20.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363066333663373864643531373835656139383932376464356666 Dec 12 18:13:20.406300 kubelet[2433]: I1212 18:13:20.406271 2433 kubelet_node_status.go:75] "Attempting to register node" node="172-234-28-21" Dec 12 18:13:20.407655 kubelet[2433]: E1212 18:13:20.407597 2433 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.28.21:6443/api/v1/nodes\": dial tcp 172.234.28.21:6443: connect: 
connection refused" node="172-234-28-21" Dec 12 18:13:20.422255 containerd[1629]: time="2025-12-12T18:13:20.422125030Z" level=info msg="StartContainer for \"e470b432b9c2eacfd11f188573a5952c9b3d24a541929ae648722fa8df98ce42\" returns successfully" Dec 12 18:13:20.458093 containerd[1629]: time="2025-12-12T18:13:20.458005290Z" level=info msg="StartContainer for \"7060f36c78dd51785ea98927dd5ff372e6b0072d9c2ac7e31cb62e30da2d373e\" returns successfully" Dec 12 18:13:20.474298 containerd[1629]: time="2025-12-12T18:13:20.474258680Z" level=info msg="StartContainer for \"76dfbbf9c3d67e52aedcfc2cedf71a1f3b3cd256c31ffa940aa6ceb3bb98c456\" returns successfully" Dec 12 18:13:20.662431 kubelet[2433]: E1212 18:13:20.661416 2433 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-28-21\" not found" node="172-234-28-21" Dec 12 18:13:20.663255 kubelet[2433]: E1212 18:13:20.663187 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:20.665607 kubelet[2433]: E1212 18:13:20.664055 2433 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-28-21\" not found" node="172-234-28-21" Dec 12 18:13:20.665764 kubelet[2433]: E1212 18:13:20.665751 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:20.668651 kubelet[2433]: E1212 18:13:20.668523 2433 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-28-21\" not found" node="172-234-28-21" Dec 12 18:13:20.668651 kubelet[2433]: E1212 18:13:20.668608 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:21.211060 kubelet[2433]: I1212 18:13:21.210981 2433 kubelet_node_status.go:75] "Attempting to register node" node="172-234-28-21" Dec 12 18:13:21.673868 kubelet[2433]: E1212 18:13:21.673830 2433 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-28-21\" not found" node="172-234-28-21" Dec 12 18:13:21.674657 kubelet[2433]: E1212 18:13:21.673973 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:21.674657 kubelet[2433]: E1212 18:13:21.674272 2433 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-28-21\" not found" node="172-234-28-21" Dec 12 18:13:21.674657 kubelet[2433]: E1212 18:13:21.674346 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:21.987938 kubelet[2433]: E1212 18:13:21.987771 2433 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-28-21\" not found" node="172-234-28-21" Dec 12 18:13:22.164072 kubelet[2433]: I1212 18:13:22.163986 2433 kubelet_node_status.go:78] "Successfully registered node" node="172-234-28-21" Dec 12 18:13:22.223727 kubelet[2433]: I1212 18:13:22.223666 2433 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:22.229187 kubelet[2433]: E1212 18:13:22.229158 2433 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-28-21\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-28-21" 
Dec 12 18:13:22.229187 kubelet[2433]: I1212 18:13:22.229184 2433 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-28-21" Dec 12 18:13:22.230531 kubelet[2433]: E1212 18:13:22.230482 2433 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-28-21\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-28-21" Dec 12 18:13:22.230584 kubelet[2433]: I1212 18:13:22.230560 2433 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:22.232634 kubelet[2433]: E1212 18:13:22.232596 2433 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-28-21\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:22.610173 kubelet[2433]: I1212 18:13:22.610109 2433 apiserver.go:52] "Watching apiserver" Dec 12 18:13:22.622154 kubelet[2433]: I1212 18:13:22.622113 2433 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:13:22.795012 kubelet[2433]: I1212 18:13:22.794950 2433 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:22.797882 kubelet[2433]: E1212 18:13:22.797829 2433 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-28-21\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:22.798171 kubelet[2433]: E1212 18:13:22.798149 2433 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:23.912560 systemd[1]: Reload requested from client PID 2704 ('systemctl') (unit session-7.scope)... 
Dec 12 18:13:23.912578 systemd[1]: Reloading... Dec 12 18:13:24.042570 zram_generator::config[2766]: No configuration found. Dec 12 18:13:24.256856 systemd[1]: Reloading finished in 343 ms. Dec 12 18:13:24.284462 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:13:24.293726 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:13:24.294045 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:13:24.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:13:24.294607 kernel: kauditd_printk_skb: 210 callbacks suppressed Dec 12 18:13:24.294667 kernel: audit: type=1131 audit(1765563204.293:402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:13:24.296559 systemd[1]: kubelet.service: Consumed 1.058s CPU time, 131.6M memory peak. Dec 12 18:13:24.301383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 12 18:13:24.301000 audit: BPF prog-id=121 op=LOAD Dec 12 18:13:24.305535 kernel: audit: type=1334 audit(1765563204.301:403): prog-id=121 op=LOAD Dec 12 18:13:24.301000 audit: BPF prog-id=85 op=UNLOAD Dec 12 18:13:24.301000 audit: BPF prog-id=122 op=LOAD Dec 12 18:13:24.310479 kernel: audit: type=1334 audit(1765563204.301:404): prog-id=85 op=UNLOAD Dec 12 18:13:24.310579 kernel: audit: type=1334 audit(1765563204.301:405): prog-id=122 op=LOAD Dec 12 18:13:24.310607 kernel: audit: type=1334 audit(1765563204.301:406): prog-id=123 op=LOAD Dec 12 18:13:24.301000 audit: BPF prog-id=123 op=LOAD Dec 12 18:13:24.301000 audit: BPF prog-id=86 op=UNLOAD Dec 12 18:13:24.317714 kernel: audit: type=1334 audit(1765563204.301:407): prog-id=86 op=UNLOAD Dec 12 18:13:24.317790 kernel: audit: type=1334 audit(1765563204.301:408): prog-id=87 op=UNLOAD Dec 12 18:13:24.301000 audit: BPF prog-id=87 op=UNLOAD Dec 12 18:13:24.319964 kernel: audit: type=1334 audit(1765563204.305:409): prog-id=124 op=LOAD Dec 12 18:13:24.305000 audit: BPF prog-id=124 op=LOAD Dec 12 18:13:24.321996 kernel: audit: type=1334 audit(1765563204.305:410): prog-id=88 op=UNLOAD Dec 12 18:13:24.305000 audit: BPF prog-id=88 op=UNLOAD Dec 12 18:13:24.324242 kernel: audit: type=1334 audit(1765563204.305:411): prog-id=125 op=LOAD Dec 12 18:13:24.305000 audit: BPF prog-id=125 op=LOAD Dec 12 18:13:24.305000 audit: BPF prog-id=126 op=LOAD Dec 12 18:13:24.305000 audit: BPF prog-id=89 op=UNLOAD Dec 12 18:13:24.305000 audit: BPF prog-id=90 op=UNLOAD Dec 12 18:13:24.313000 audit: BPF prog-id=127 op=LOAD Dec 12 18:13:24.313000 audit: BPF prog-id=70 op=UNLOAD Dec 12 18:13:24.313000 audit: BPF prog-id=128 op=LOAD Dec 12 18:13:24.313000 audit: BPF prog-id=129 op=LOAD Dec 12 18:13:24.313000 audit: BPF prog-id=71 op=UNLOAD Dec 12 18:13:24.313000 audit: BPF prog-id=72 op=UNLOAD Dec 12 18:13:24.313000 audit: BPF prog-id=130 op=LOAD Dec 12 18:13:24.313000 audit: BPF prog-id=67 op=UNLOAD Dec 12 18:13:24.313000 audit: BPF prog-id=131 
op=LOAD Dec 12 18:13:24.313000 audit: BPF prog-id=132 op=LOAD Dec 12 18:13:24.313000 audit: BPF prog-id=68 op=UNLOAD Dec 12 18:13:24.313000 audit: BPF prog-id=69 op=UNLOAD Dec 12 18:13:24.315000 audit: BPF prog-id=133 op=LOAD Dec 12 18:13:24.315000 audit: BPF prog-id=76 op=UNLOAD Dec 12 18:13:24.315000 audit: BPF prog-id=134 op=LOAD Dec 12 18:13:24.315000 audit: BPF prog-id=135 op=LOAD Dec 12 18:13:24.317000 audit: BPF prog-id=77 op=UNLOAD Dec 12 18:13:24.317000 audit: BPF prog-id=78 op=UNLOAD Dec 12 18:13:24.317000 audit: BPF prog-id=136 op=LOAD Dec 12 18:13:24.317000 audit: BPF prog-id=79 op=UNLOAD Dec 12 18:13:24.317000 audit: BPF prog-id=137 op=LOAD Dec 12 18:13:24.317000 audit: BPF prog-id=83 op=UNLOAD Dec 12 18:13:24.320000 audit: BPF prog-id=138 op=LOAD Dec 12 18:13:24.320000 audit: BPF prog-id=73 op=UNLOAD Dec 12 18:13:24.320000 audit: BPF prog-id=139 op=LOAD Dec 12 18:13:24.320000 audit: BPF prog-id=140 op=LOAD Dec 12 18:13:24.320000 audit: BPF prog-id=74 op=UNLOAD Dec 12 18:13:24.320000 audit: BPF prog-id=75 op=UNLOAD Dec 12 18:13:24.321000 audit: BPF prog-id=141 op=LOAD Dec 12 18:13:24.321000 audit: BPF prog-id=80 op=UNLOAD Dec 12 18:13:24.322000 audit: BPF prog-id=142 op=LOAD Dec 12 18:13:24.322000 audit: BPF prog-id=143 op=LOAD Dec 12 18:13:24.322000 audit: BPF prog-id=81 op=UNLOAD Dec 12 18:13:24.322000 audit: BPF prog-id=82 op=UNLOAD Dec 12 18:13:24.322000 audit: BPF prog-id=144 op=LOAD Dec 12 18:13:24.322000 audit: BPF prog-id=84 op=UNLOAD Dec 12 18:13:24.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:13:24.482608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 18:13:24.490077 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:13:24.542084 kubelet[2802]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:13:24.542084 kubelet[2802]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:13:24.542084 kubelet[2802]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:13:24.542084 kubelet[2802]: I1212 18:13:24.541476 2802 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:13:24.548168 kubelet[2802]: I1212 18:13:24.548134 2802 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:13:24.548168 kubelet[2802]: I1212 18:13:24.548155 2802 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:13:24.548376 kubelet[2802]: I1212 18:13:24.548351 2802 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:13:24.549610 kubelet[2802]: I1212 18:13:24.549583 2802 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 12 18:13:24.552996 kubelet[2802]: I1212 18:13:24.552153 2802 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:13:24.556955 kubelet[2802]: I1212 18:13:24.556936 2802 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:13:24.562879 kubelet[2802]: I1212 18:13:24.562472 2802 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:13:24.562879 kubelet[2802]: I1212 18:13:24.562722 2802 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:13:24.562879 kubelet[2802]: I1212 18:13:24.562740 2802 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-28-21","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUM
anagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:13:24.563129 kubelet[2802]: I1212 18:13:24.562887 2802 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:13:24.563129 kubelet[2802]: I1212 18:13:24.562896 2802 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:13:24.563129 kubelet[2802]: I1212 18:13:24.562942 2802 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:13:24.563129 kubelet[2802]: I1212 18:13:24.563074 2802 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:13:24.563129 kubelet[2802]: I1212 18:13:24.563110 2802 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:13:24.563129 kubelet[2802]: I1212 18:13:24.563135 2802 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:13:24.563337 kubelet[2802]: I1212 18:13:24.563149 2802 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:13:24.571439 kubelet[2802]: I1212 18:13:24.571414 2802 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 18:13:24.571870 kubelet[2802]: I1212 18:13:24.571856 2802 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:13:24.572406 kubelet[2802]: I1212 18:13:24.572394 2802 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:13:24.572559 kubelet[2802]: I1212 18:13:24.572548 2802 server.go:1287] "Started kubelet" Dec 12 18:13:24.575054 kubelet[2802]: I1212 18:13:24.574871 2802 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 
18:13:24.575213 kubelet[2802]: I1212 18:13:24.575191 2802 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:13:24.576460 kubelet[2802]: I1212 18:13:24.576442 2802 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:13:24.578067 kubelet[2802]: I1212 18:13:24.578051 2802 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:13:24.578369 kubelet[2802]: I1212 18:13:24.578343 2802 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:13:24.581972 kubelet[2802]: E1212 18:13:24.581957 2802 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:13:24.582362 kubelet[2802]: I1212 18:13:24.582334 2802 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:13:24.589031 kubelet[2802]: I1212 18:13:24.589013 2802 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:13:24.589306 kubelet[2802]: I1212 18:13:24.589293 2802 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:13:24.590002 kubelet[2802]: I1212 18:13:24.589640 2802 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:13:24.593933 kubelet[2802]: I1212 18:13:24.593904 2802 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:13:24.593933 kubelet[2802]: I1212 18:13:24.593925 2802 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:13:24.594034 kubelet[2802]: I1212 18:13:24.593998 2802 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:13:24.600439 kubelet[2802]: I1212 18:13:24.600417 2802 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Dec 12 18:13:24.611915 kubelet[2802]: I1212 18:13:24.611888 2802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:13:24.612007 kubelet[2802]: I1212 18:13:24.611997 2802 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:13:24.612077 kubelet[2802]: I1212 18:13:24.612066 2802 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:13:24.612552 kubelet[2802]: I1212 18:13:24.612120 2802 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:13:24.612552 kubelet[2802]: E1212 18:13:24.612172 2802 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:13:24.669681 kubelet[2802]: I1212 18:13:24.669658 2802 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:13:24.669861 kubelet[2802]: I1212 18:13:24.669850 2802 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:13:24.669946 kubelet[2802]: I1212 18:13:24.669936 2802 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:13:24.670134 kubelet[2802]: I1212 18:13:24.670119 2802 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:13:24.670275 kubelet[2802]: I1212 18:13:24.670234 2802 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:13:24.670323 kubelet[2802]: I1212 18:13:24.670315 2802 policy_none.go:49] "None policy: Start" Dec 12 18:13:24.670536 kubelet[2802]: I1212 18:13:24.670356 2802 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:13:24.670536 kubelet[2802]: I1212 18:13:24.670369 2802 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:13:24.670536 kubelet[2802]: I1212 18:13:24.670468 2802 state_mem.go:75] "Updated machine memory state" Dec 12 18:13:24.676171 kubelet[2802]: I1212 18:13:24.676151 2802 
manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:13:24.677633 kubelet[2802]: I1212 18:13:24.676765 2802 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:13:24.677633 kubelet[2802]: I1212 18:13:24.676778 2802 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:13:24.679371 kubelet[2802]: I1212 18:13:24.679359 2802 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:13:24.682360 kubelet[2802]: E1212 18:13:24.682344 2802 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:13:24.713287 kubelet[2802]: I1212 18:13:24.713235 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-28-21" Dec 12 18:13:24.713420 kubelet[2802]: I1212 18:13:24.713404 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:24.713584 kubelet[2802]: I1212 18:13:24.713563 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:24.789481 kubelet[2802]: I1212 18:13:24.789443 2802 kubelet_node_status.go:75] "Attempting to register node" node="172-234-28-21" Dec 12 18:13:24.790392 kubelet[2802]: I1212 18:13:24.790359 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d402a1e1decf8950f6d505b0e31998c-k8s-certs\") pod \"kube-apiserver-172-234-28-21\" (UID: \"7d402a1e1decf8950f6d505b0e31998c\") " pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:24.790392 kubelet[2802]: I1212 18:13:24.790390 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/fabc874faa7d1cceb9a27f2090c29f50-ca-certs\") pod \"kube-controller-manager-172-234-28-21\" (UID: \"fabc874faa7d1cceb9a27f2090c29f50\") " pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:24.790491 kubelet[2802]: I1212 18:13:24.790408 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fabc874faa7d1cceb9a27f2090c29f50-k8s-certs\") pod \"kube-controller-manager-172-234-28-21\" (UID: \"fabc874faa7d1cceb9a27f2090c29f50\") " pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:24.790491 kubelet[2802]: I1212 18:13:24.790425 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fabc874faa7d1cceb9a27f2090c29f50-kubeconfig\") pod \"kube-controller-manager-172-234-28-21\" (UID: \"fabc874faa7d1cceb9a27f2090c29f50\") " pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:24.790491 kubelet[2802]: I1212 18:13:24.790447 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fabc874faa7d1cceb9a27f2090c29f50-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-28-21\" (UID: \"fabc874faa7d1cceb9a27f2090c29f50\") " pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:24.790491 kubelet[2802]: I1212 18:13:24.790464 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1b922e9067be7eb88098b58eebafcb2-kubeconfig\") pod \"kube-scheduler-172-234-28-21\" (UID: \"c1b922e9067be7eb88098b58eebafcb2\") " pod="kube-system/kube-scheduler-172-234-28-21" Dec 12 18:13:24.790491 kubelet[2802]: I1212 18:13:24.790480 2802 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d402a1e1decf8950f6d505b0e31998c-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-28-21\" (UID: \"7d402a1e1decf8950f6d505b0e31998c\") " pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:24.790636 kubelet[2802]: I1212 18:13:24.790495 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fabc874faa7d1cceb9a27f2090c29f50-flexvolume-dir\") pod \"kube-controller-manager-172-234-28-21\" (UID: \"fabc874faa7d1cceb9a27f2090c29f50\") " pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:24.790636 kubelet[2802]: I1212 18:13:24.790531 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d402a1e1decf8950f6d505b0e31998c-ca-certs\") pod \"kube-apiserver-172-234-28-21\" (UID: \"7d402a1e1decf8950f6d505b0e31998c\") " pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:24.798045 kubelet[2802]: I1212 18:13:24.797958 2802 kubelet_node_status.go:124] "Node was previously registered" node="172-234-28-21" Dec 12 18:13:24.798045 kubelet[2802]: I1212 18:13:24.798020 2802 kubelet_node_status.go:78] "Successfully registered node" node="172-234-28-21" Dec 12 18:13:25.018065 kubelet[2802]: E1212 18:13:25.017799 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:25.019451 kubelet[2802]: E1212 18:13:25.019412 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:25.021871 kubelet[2802]: E1212 18:13:25.021845 2802 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:25.564503 kubelet[2802]: I1212 18:13:25.564446 2802 apiserver.go:52] "Watching apiserver" Dec 12 18:13:25.590408 kubelet[2802]: I1212 18:13:25.590368 2802 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:13:25.650919 kubelet[2802]: I1212 18:13:25.650731 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:25.651671 kubelet[2802]: I1212 18:13:25.651270 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-28-21" Dec 12 18:13:25.651930 kubelet[2802]: I1212 18:13:25.651917 2802 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:25.662998 kubelet[2802]: E1212 18:13:25.662959 2802 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-28-21\" already exists" pod="kube-system/kube-apiserver-172-234-28-21" Dec 12 18:13:25.664922 kubelet[2802]: E1212 18:13:25.664472 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:25.665325 kubelet[2802]: E1212 18:13:25.665309 2802 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-28-21\" already exists" pod="kube-system/kube-controller-manager-172-234-28-21" Dec 12 18:13:25.665539 kubelet[2802]: E1212 18:13:25.665524 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:25.665663 kubelet[2802]: E1212 18:13:25.665394 2802 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-28-21\" already exists" pod="kube-system/kube-scheduler-172-234-28-21" Dec 12 18:13:25.665989 kubelet[2802]: E1212 18:13:25.665976 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:25.679657 kubelet[2802]: I1212 18:13:25.679557 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-28-21" podStartSLOduration=1.67952755 podStartE2EDuration="1.67952755s" podCreationTimestamp="2025-12-12 18:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:13:25.67925086 +0000 UTC m=+1.182060011" watchObservedRunningTime="2025-12-12 18:13:25.67952755 +0000 UTC m=+1.182336701" Dec 12 18:13:25.690980 kubelet[2802]: I1212 18:13:25.690877 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-28-21" podStartSLOduration=1.6908670799999999 podStartE2EDuration="1.69086708s" podCreationTimestamp="2025-12-12 18:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:13:25.68573346 +0000 UTC m=+1.188542611" watchObservedRunningTime="2025-12-12 18:13:25.69086708 +0000 UTC m=+1.193676231" Dec 12 18:13:26.653337 kubelet[2802]: E1212 18:13:26.653280 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:26.656525 kubelet[2802]: E1212 18:13:26.654786 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:26.656525 kubelet[2802]: E1212 18:13:26.655104 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:27.600247 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 12 18:13:27.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:13:27.614000 audit: BPF prog-id=121 op=UNLOAD Dec 12 18:13:29.155381 kubelet[2802]: I1212 18:13:29.155267 2802 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:13:29.156950 containerd[1629]: time="2025-12-12T18:13:29.156286850Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:13:29.157749 kubelet[2802]: I1212 18:13:29.156411 2802 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:13:30.040336 kubelet[2802]: I1212 18:13:30.040225 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-28-21" podStartSLOduration=6.04020839 podStartE2EDuration="6.04020839s" podCreationTimestamp="2025-12-12 18:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:13:25.69109586 +0000 UTC m=+1.193905011" watchObservedRunningTime="2025-12-12 18:13:30.04020839 +0000 UTC m=+5.543017551" Dec 12 18:13:30.052972 systemd[1]: Created slice kubepods-besteffort-podf694e465_86e2_4883_b9fa_453fcf308471.slice - libcontainer container kubepods-besteffort-podf694e465_86e2_4883_b9fa_453fcf308471.slice. 
Dec 12 18:13:30.128325 kubelet[2802]: I1212 18:13:30.128286 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f694e465-86e2-4883-b9fa-453fcf308471-lib-modules\") pod \"kube-proxy-w6hbk\" (UID: \"f694e465-86e2-4883-b9fa-453fcf308471\") " pod="kube-system/kube-proxy-w6hbk" Dec 12 18:13:30.128325 kubelet[2802]: I1212 18:13:30.128322 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt5hq\" (UniqueName: \"kubernetes.io/projected/f694e465-86e2-4883-b9fa-453fcf308471-kube-api-access-lt5hq\") pod \"kube-proxy-w6hbk\" (UID: \"f694e465-86e2-4883-b9fa-453fcf308471\") " pod="kube-system/kube-proxy-w6hbk" Dec 12 18:13:30.128325 kubelet[2802]: I1212 18:13:30.128343 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f694e465-86e2-4883-b9fa-453fcf308471-kube-proxy\") pod \"kube-proxy-w6hbk\" (UID: \"f694e465-86e2-4883-b9fa-453fcf308471\") " pod="kube-system/kube-proxy-w6hbk" Dec 12 18:13:30.128589 kubelet[2802]: I1212 18:13:30.128359 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f694e465-86e2-4883-b9fa-453fcf308471-xtables-lock\") pod \"kube-proxy-w6hbk\" (UID: \"f694e465-86e2-4883-b9fa-453fcf308471\") " pod="kube-system/kube-proxy-w6hbk" Dec 12 18:13:30.329750 kubelet[2802]: I1212 18:13:30.329558 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/409ce6fc-73ac-4200-abf6-060f68917b84-var-lib-calico\") pod \"tigera-operator-7dcd859c48-l6q5b\" (UID: \"409ce6fc-73ac-4200-abf6-060f68917b84\") " pod="tigera-operator/tigera-operator-7dcd859c48-l6q5b" Dec 12 18:13:30.329750 kubelet[2802]: I1212 
18:13:30.329621 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk7vh\" (UniqueName: \"kubernetes.io/projected/409ce6fc-73ac-4200-abf6-060f68917b84-kube-api-access-sk7vh\") pod \"tigera-operator-7dcd859c48-l6q5b\" (UID: \"409ce6fc-73ac-4200-abf6-060f68917b84\") " pod="tigera-operator/tigera-operator-7dcd859c48-l6q5b" Dec 12 18:13:30.331456 systemd[1]: Created slice kubepods-besteffort-pod409ce6fc_73ac_4200_abf6_060f68917b84.slice - libcontainer container kubepods-besteffort-pod409ce6fc_73ac_4200_abf6_060f68917b84.slice. Dec 12 18:13:30.360820 kubelet[2802]: E1212 18:13:30.360786 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:30.361630 containerd[1629]: time="2025-12-12T18:13:30.361598130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w6hbk,Uid:f694e465-86e2-4883-b9fa-453fcf308471,Namespace:kube-system,Attempt:0,}" Dec 12 18:13:30.379735 containerd[1629]: time="2025-12-12T18:13:30.379659800Z" level=info msg="connecting to shim e075fc73f65e880b7a85ee6e150614d561018ec384491b07f6fe3ce1275323a8" address="unix:///run/containerd/s/93610ec040173846d4fce8148865b8da5d1fa305dc52dab7be86cee67f26de47" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:13:30.408976 systemd[1]: Started cri-containerd-e075fc73f65e880b7a85ee6e150614d561018ec384491b07f6fe3ce1275323a8.scope - libcontainer container e075fc73f65e880b7a85ee6e150614d561018ec384491b07f6fe3ce1275323a8. 
Dec 12 18:13:30.426588 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 12 18:13:30.426700 kernel: audit: type=1334 audit(1765563210.422:454): prog-id=145 op=LOAD Dec 12 18:13:30.422000 audit: BPF prog-id=145 op=LOAD Dec 12 18:13:30.426000 audit: BPF prog-id=146 op=LOAD Dec 12 18:13:30.434574 kernel: audit: type=1334 audit(1765563210.426:455): prog-id=146 op=LOAD Dec 12 18:13:30.434635 kernel: audit: type=1300 audit(1765563210.426:455): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2858 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.426000 audit[2870]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2858 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.448128 kernel: audit: type=1327 audit(1765563210.426:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373566633733663635653838306237613835656536653135303631 Dec 12 18:13:30.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373566633733663635653838306237613835656536653135303631 Dec 12 18:13:30.426000 audit: BPF prog-id=146 op=UNLOAD Dec 12 18:13:30.465553 kernel: audit: type=1334 audit(1765563210.426:456): prog-id=146 op=UNLOAD Dec 12 18:13:30.465617 kernel: audit: type=1300 audit(1765563210.426:456): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 
ppid=2858 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.426000 audit[2870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.474292 kernel: audit: type=1327 audit(1765563210.426:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373566633733663635653838306237613835656536653135303631 Dec 12 18:13:30.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373566633733663635653838306237613835656536653135303631 Dec 12 18:13:30.476728 kernel: audit: type=1334 audit(1765563210.426:457): prog-id=147 op=LOAD Dec 12 18:13:30.476990 kernel: audit: type=1300 audit(1765563210.426:457): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2858 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.426000 audit: BPF prog-id=147 op=LOAD Dec 12 18:13:30.426000 audit[2870]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2858 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:13:30.478014 containerd[1629]: time="2025-12-12T18:13:30.477923680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w6hbk,Uid:f694e465-86e2-4883-b9fa-453fcf308471,Namespace:kube-system,Attempt:0,} returns sandbox id \"e075fc73f65e880b7a85ee6e150614d561018ec384491b07f6fe3ce1275323a8\"" Dec 12 18:13:30.484405 kubelet[2802]: E1212 18:13:30.479667 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:30.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373566633733663635653838306237613835656536653135303631 Dec 12 18:13:30.492283 containerd[1629]: time="2025-12-12T18:13:30.485110560Z" level=info msg="CreateContainer within sandbox \"e075fc73f65e880b7a85ee6e150614d561018ec384491b07f6fe3ce1275323a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:13:30.426000 audit: BPF prog-id=148 op=LOAD Dec 12 18:13:30.426000 audit[2870]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2858 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.492554 kernel: audit: type=1327 audit(1765563210.426:457): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373566633733663635653838306237613835656536653135303631 Dec 12 18:13:30.426000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373566633733663635653838306237613835656536653135303631 Dec 12 18:13:30.426000 audit: BPF prog-id=148 op=UNLOAD Dec 12 18:13:30.426000 audit[2870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373566633733663635653838306237613835656536653135303631 Dec 12 18:13:30.426000 audit: BPF prog-id=147 op=UNLOAD Dec 12 18:13:30.426000 audit[2870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373566633733663635653838306237613835656536653135303631 Dec 12 18:13:30.426000 audit: BPF prog-id=149 op=LOAD Dec 12 18:13:30.426000 audit[2870]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2858 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:13:30.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530373566633733663635653838306237613835656536653135303631 Dec 12 18:13:30.499418 containerd[1629]: time="2025-12-12T18:13:30.499366270Z" level=info msg="Container f13228f637435f3f91827971648059f28e5fb853024ae6459de2ad2903309931: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:13:30.505041 containerd[1629]: time="2025-12-12T18:13:30.505005470Z" level=info msg="CreateContainer within sandbox \"e075fc73f65e880b7a85ee6e150614d561018ec384491b07f6fe3ce1275323a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f13228f637435f3f91827971648059f28e5fb853024ae6459de2ad2903309931\"" Dec 12 18:13:30.505995 containerd[1629]: time="2025-12-12T18:13:30.505976950Z" level=info msg="StartContainer for \"f13228f637435f3f91827971648059f28e5fb853024ae6459de2ad2903309931\"" Dec 12 18:13:30.507253 containerd[1629]: time="2025-12-12T18:13:30.507198090Z" level=info msg="connecting to shim f13228f637435f3f91827971648059f28e5fb853024ae6459de2ad2903309931" address="unix:///run/containerd/s/93610ec040173846d4fce8148865b8da5d1fa305dc52dab7be86cee67f26de47" protocol=ttrpc version=3 Dec 12 18:13:30.539719 systemd[1]: Started cri-containerd-f13228f637435f3f91827971648059f28e5fb853024ae6459de2ad2903309931.scope - libcontainer container f13228f637435f3f91827971648059f28e5fb853024ae6459de2ad2903309931. 
Dec 12 18:13:30.585000 audit: BPF prog-id=150 op=LOAD Dec 12 18:13:30.585000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2858 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631333232386636333734333566336639313832373937313634383035 Dec 12 18:13:30.585000 audit: BPF prog-id=151 op=LOAD Dec 12 18:13:30.585000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2858 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631333232386636333734333566336639313832373937313634383035 Dec 12 18:13:30.585000 audit: BPF prog-id=151 op=UNLOAD Dec 12 18:13:30.585000 audit[2896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.585000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631333232386636333734333566336639313832373937313634383035 Dec 12 18:13:30.585000 audit: BPF prog-id=150 op=UNLOAD Dec 12 18:13:30.585000 audit[2896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631333232386636333734333566336639313832373937313634383035 Dec 12 18:13:30.585000 audit: BPF prog-id=152 op=LOAD Dec 12 18:13:30.585000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2858 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631333232386636333734333566336639313832373937313634383035 Dec 12 18:13:30.611811 containerd[1629]: time="2025-12-12T18:13:30.611709970Z" level=info msg="StartContainer for \"f13228f637435f3f91827971648059f28e5fb853024ae6459de2ad2903309931\" returns successfully" Dec 12 18:13:30.636613 containerd[1629]: time="2025-12-12T18:13:30.634775920Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-l6q5b,Uid:409ce6fc-73ac-4200-abf6-060f68917b84,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:13:30.649657 containerd[1629]: time="2025-12-12T18:13:30.649617170Z" level=info msg="connecting to shim 874658158405c6bd3a38730165354c191da3e5f90f7c242b2c11a6700425c79b" address="unix:///run/containerd/s/0efb67adbcb23b7ea17ed116fafd99c296934a830e09e03ac8a3a46cb6515c8a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:13:30.671569 kubelet[2802]: E1212 18:13:30.670497 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:30.682664 kubelet[2802]: I1212 18:13:30.682626 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w6hbk" podStartSLOduration=0.68258932 podStartE2EDuration="682.58932ms" podCreationTimestamp="2025-12-12 18:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:13:30.68255415 +0000 UTC m=+6.185363301" watchObservedRunningTime="2025-12-12 18:13:30.68258932 +0000 UTC m=+6.185398471" Dec 12 18:13:30.692830 systemd[1]: Started cri-containerd-874658158405c6bd3a38730165354c191da3e5f90f7c242b2c11a6700425c79b.scope - libcontainer container 874658158405c6bd3a38730165354c191da3e5f90f7c242b2c11a6700425c79b. 
Dec 12 18:13:30.706000 audit: BPF prog-id=153 op=LOAD Dec 12 18:13:30.706000 audit: BPF prog-id=154 op=LOAD Dec 12 18:13:30.706000 audit[2950]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe238 a2=98 a3=0 items=0 ppid=2935 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343635383135383430356336626433613338373330313635333534 Dec 12 18:13:30.707000 audit: BPF prog-id=154 op=UNLOAD Dec 12 18:13:30.707000 audit[2950]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2935 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343635383135383430356336626433613338373330313635333534 Dec 12 18:13:30.707000 audit: BPF prog-id=155 op=LOAD Dec 12 18:13:30.707000 audit[2950]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe488 a2=98 a3=0 items=0 ppid=2935 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.707000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343635383135383430356336626433613338373330313635333534 Dec 12 18:13:30.707000 audit: BPF prog-id=156 op=LOAD Dec 12 18:13:30.707000 audit[2950]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fe218 a2=98 a3=0 items=0 ppid=2935 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343635383135383430356336626433613338373330313635333534 Dec 12 18:13:30.707000 audit: BPF prog-id=156 op=UNLOAD Dec 12 18:13:30.707000 audit[2950]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2935 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343635383135383430356336626433613338373330313635333534 Dec 12 18:13:30.707000 audit: BPF prog-id=155 op=UNLOAD Dec 12 18:13:30.707000 audit[2950]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2935 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:13:30.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343635383135383430356336626433613338373330313635333534 Dec 12 18:13:30.707000 audit: BPF prog-id=157 op=LOAD Dec 12 18:13:30.707000 audit[2950]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe6e8 a2=98 a3=0 items=0 ppid=2935 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837343635383135383430356336626433613338373330313635333534 Dec 12 18:13:30.750715 containerd[1629]: time="2025-12-12T18:13:30.750479080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-l6q5b,Uid:409ce6fc-73ac-4200-abf6-060f68917b84,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"874658158405c6bd3a38730165354c191da3e5f90f7c242b2c11a6700425c79b\"" Dec 12 18:13:30.753666 containerd[1629]: time="2025-12-12T18:13:30.753637200Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:13:30.770000 audit[3006]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:30.770000 audit[3005]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.770000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2239f460 a2=0 a3=7ffc2239f44c items=0 ppid=2909 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:13:30.770000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5d8d2ec0 a2=0 a3=7ffe5d8d2eac items=0 ppid=2909 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 18:13:30.773000 audit[3008]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:30.773000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6da27420 a2=0 a3=7ffc6da2740c items=0 ppid=2909 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:13:30.773000 audit[3007]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.773000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda6b0cb90 a2=0 a3=7ffda6b0cb7c items=0 ppid=2909 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 18:13:30.775000 audit[3010]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:30.775000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdf28ff90 a2=0 a3=7ffcdf28ff7c items=0 ppid=2909 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:13:30.775000 audit[3009]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.775000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe11c2c950 a2=0 a3=7ffe11c2c93c items=0 ppid=2909 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.775000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 18:13:30.880000 audit[3012]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.880000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdfaff7210 a2=0 a3=7ffdfaff71fc items=0 ppid=2909 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:13:30.884000 audit[3014]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.884000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff353e3710 a2=0 a3=7fff353e36fc items=0 ppid=2909 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 18:13:30.889000 audit[3017]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.889000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdec508490 a2=0 a3=7ffdec50847c items=0 ppid=2909 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 18:13:30.891000 audit[3018]: NETFILTER_CFG table=filter:63 family=2 
entries=1 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.891000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc7cfc250 a2=0 a3=7fffc7cfc23c items=0 ppid=2909 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.891000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:13:30.894000 audit[3020]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.894000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe364679b0 a2=0 a3=7ffe3646799c items=0 ppid=2909 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.894000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:13:30.895000 audit[3021]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.895000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe821d5350 a2=0 a3=7ffe821d533c items=0 ppid=2909 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.895000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:13:30.899000 audit[3023]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.899000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc55a2a310 a2=0 a3=7ffc55a2a2fc items=0 ppid=2909 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 18:13:30.904000 audit[3026]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.904000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffff1d6e5e0 a2=0 a3=7ffff1d6e5cc items=0 ppid=2909 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 18:13:30.906000 audit[3027]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.906000 audit[3027]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffdd18a6e50 a2=0 a3=7ffdd18a6e3c items=0 ppid=2909 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.906000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:13:30.909000 audit[3029]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.909000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff715e6780 a2=0 a3=7fff715e676c items=0 ppid=2909 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:13:30.910000 audit[3030]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.910000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd63efff80 a2=0 a3=7ffd63efff6c items=0 ppid=2909 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:13:30.914000 audit[3032]: NETFILTER_CFG table=filter:71 family=2 entries=1 
op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.914000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2188f1c0 a2=0 a3=7fff2188f1ac items=0 ppid=2909 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:13:30.918000 audit[3035]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.918000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee17f94d0 a2=0 a3=7ffee17f94bc items=0 ppid=2909 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:13:30.926000 audit[3038]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.926000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb3384660 a2=0 a3=7ffcb338464c items=0 ppid=2909 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.926000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 18:13:30.932000 audit[3039]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.932000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeba7f5710 a2=0 a3=7ffeba7f56fc items=0 ppid=2909 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:13:30.936000 audit[3041]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.936000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc92ddc3c0 a2=0 a3=7ffc92ddc3ac items=0 ppid=2909 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:13:30.941000 audit[3044]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 12 18:13:30.941000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc551d7fb0 a2=0 a3=7ffc551d7f9c items=0 ppid=2909 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.941000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:13:30.943000 audit[3045]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.943000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeba362b10 a2=0 a3=7ffeba362afc items=0 ppid=2909 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:13:30.946000 audit[3047]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 18:13:30.946000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffddbe87400 a2=0 a3=7ffddbe873ec items=0 ppid=2909 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.946000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:13:30.970000 audit[3053]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:30.970000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc91a732e0 a2=0 a3=7ffc91a732cc items=0 ppid=2909 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:30.980000 audit[3053]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:30.980000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc91a732e0 a2=0 a3=7ffc91a732cc items=0 ppid=2909 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:30.982000 audit[3058]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:30.982000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe7a3f24c0 a2=0 a3=7ffe7a3f24ac items=0 ppid=2909 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 18:13:30.986000 audit[3060]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:30.986000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe4049ac70 a2=0 a3=7ffe4049ac5c items=0 ppid=2909 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 18:13:30.992000 audit[3063]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:30.992000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe5f5747b0 a2=0 a3=7ffe5f57479c items=0 ppid=2909 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.992000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 18:13:30.993000 
audit[3064]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:30.993000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff09ae94b0 a2=0 a3=7fff09ae949c items=0 ppid=2909 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 18:13:30.996000 audit[3066]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:30.996000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc2f60a7a0 a2=0 a3=7ffc2f60a78c items=0 ppid=2909 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:30.996000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 18:13:30.998000 audit[3067]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:30.998000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3cc707f0 a2=0 a3=7ffc3cc707dc items=0 ppid=2909 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:13:30.998000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 18:13:31.002000 audit[3069]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.002000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc34ab79a0 a2=0 a3=7ffc34ab798c items=0 ppid=2909 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 18:13:31.006000 audit[3072]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.006000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd9b6308d0 a2=0 a3=7ffd9b6308bc items=0 ppid=2909 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.006000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 18:13:31.008000 audit[3073]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.008000 
audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc159f2720 a2=0 a3=7ffc159f270c items=0 ppid=2909 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 18:13:31.011000 audit[3075]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.011000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc61c9cad0 a2=0 a3=7ffc61c9cabc items=0 ppid=2909 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 18:13:31.013000 audit[3076]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.013000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd386cd0a0 a2=0 a3=7ffd386cd08c items=0 ppid=2909 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.013000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 18:13:31.016000 audit[3078]: 
NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.016000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc42a9e160 a2=0 a3=7ffc42a9e14c items=0 ppid=2909 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.016000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 18:13:31.021000 audit[3081]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.021000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf3a26670 a2=0 a3=7ffcf3a2665c items=0 ppid=2909 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.021000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 18:13:31.026000 audit[3084]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.026000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd87cb18c0 a2=0 a3=7ffd87cb18ac items=0 ppid=2909 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 18:13:31.027000 audit[3085]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.027000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff21ebdf00 a2=0 a3=7fff21ebdeec items=0 ppid=2909 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.027000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 18:13:31.031000 audit[3087]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.031000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffebf19edf0 a2=0 a3=7ffebf19eddc items=0 ppid=2909 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:13:31.037000 audit[3090]: NETFILTER_CFG table=nat:97 family=10 entries=1 
op=nft_register_rule pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.037000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffdc3f1f30 a2=0 a3=7fffdc3f1f1c items=0 ppid=2909 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 18:13:31.039000 audit[3091]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.039000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb51755e0 a2=0 a3=7ffdb51755cc items=0 ppid=2909 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 18:13:31.042000 audit[3093]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.042000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffca067aa70 a2=0 a3=7ffca067aa5c items=0 ppid=2909 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.042000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 18:13:31.047000 audit[3094]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.047000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff760befa0 a2=0 a3=7fff760bef8c items=0 ppid=2909 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 18:13:31.050000 audit[3096]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.050000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcb6dd5ff0 a2=0 a3=7ffcb6dd5fdc items=0 ppid=2909 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:13:31.055000 audit[3099]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 18:13:31.055000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdf5101630 a2=0 a3=7ffdf510161c items=0 ppid=2909 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 18:13:31.059000 audit[3101]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:13:31.059000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe8573e3a0 a2=0 a3=7ffe8573e38c items=0 ppid=2909 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.059000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:31.060000 audit[3101]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 18:13:31.060000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe8573e3a0 a2=0 a3=7ffe8573e38c items=0 ppid=2909 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:31.060000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:31.384243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4252326349.mount: Deactivated successfully. 
Dec 12 18:13:32.304003 containerd[1629]: time="2025-12-12T18:13:32.303947580Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:32.305027 containerd[1629]: time="2025-12-12T18:13:32.304894970Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 12 18:13:32.305572 containerd[1629]: time="2025-12-12T18:13:32.305548880Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:32.306985 containerd[1629]: time="2025-12-12T18:13:32.306958180Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:32.307641 containerd[1629]: time="2025-12-12T18:13:32.307617290Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.55395301s" Dec 12 18:13:32.307713 containerd[1629]: time="2025-12-12T18:13:32.307699110Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:13:32.310907 containerd[1629]: time="2025-12-12T18:13:32.310884670Z" level=info msg="CreateContainer within sandbox \"874658158405c6bd3a38730165354c191da3e5f90f7c242b2c11a6700425c79b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:13:32.317545 containerd[1629]: time="2025-12-12T18:13:32.315707730Z" level=info msg="Container 
9d345d832d079f01a71c9d9f4a72f284424cb27a5b4359d21744acb7cea709b2: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:13:32.324385 containerd[1629]: time="2025-12-12T18:13:32.324345100Z" level=info msg="CreateContainer within sandbox \"874658158405c6bd3a38730165354c191da3e5f90f7c242b2c11a6700425c79b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9d345d832d079f01a71c9d9f4a72f284424cb27a5b4359d21744acb7cea709b2\"" Dec 12 18:13:32.326298 containerd[1629]: time="2025-12-12T18:13:32.326265000Z" level=info msg="StartContainer for \"9d345d832d079f01a71c9d9f4a72f284424cb27a5b4359d21744acb7cea709b2\"" Dec 12 18:13:32.327275 containerd[1629]: time="2025-12-12T18:13:32.327244690Z" level=info msg="connecting to shim 9d345d832d079f01a71c9d9f4a72f284424cb27a5b4359d21744acb7cea709b2" address="unix:///run/containerd/s/0efb67adbcb23b7ea17ed116fafd99c296934a830e09e03ac8a3a46cb6515c8a" protocol=ttrpc version=3 Dec 12 18:13:32.356669 systemd[1]: Started cri-containerd-9d345d832d079f01a71c9d9f4a72f284424cb27a5b4359d21744acb7cea709b2.scope - libcontainer container 9d345d832d079f01a71c9d9f4a72f284424cb27a5b4359d21744acb7cea709b2. 
Dec 12 18:13:32.371000 audit: BPF prog-id=158 op=LOAD Dec 12 18:13:32.371000 audit: BPF prog-id=159 op=LOAD Dec 12 18:13:32.371000 audit[3110]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2935 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:32.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964333435643833326430373966303161373163396439663461373266 Dec 12 18:13:32.371000 audit: BPF prog-id=159 op=UNLOAD Dec 12 18:13:32.371000 audit[3110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2935 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:32.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964333435643833326430373966303161373163396439663461373266 Dec 12 18:13:32.371000 audit: BPF prog-id=160 op=LOAD Dec 12 18:13:32.371000 audit[3110]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2935 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:32.371000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964333435643833326430373966303161373163396439663461373266 Dec 12 18:13:32.371000 audit: BPF prog-id=161 op=LOAD Dec 12 18:13:32.371000 audit[3110]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2935 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:32.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964333435643833326430373966303161373163396439663461373266 Dec 12 18:13:32.372000 audit: BPF prog-id=161 op=UNLOAD Dec 12 18:13:32.372000 audit[3110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2935 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:32.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964333435643833326430373966303161373163396439663461373266 Dec 12 18:13:32.372000 audit: BPF prog-id=160 op=UNLOAD Dec 12 18:13:32.372000 audit[3110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2935 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:13:32.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964333435643833326430373966303161373163396439663461373266 Dec 12 18:13:32.372000 audit: BPF prog-id=162 op=LOAD Dec 12 18:13:32.372000 audit[3110]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2935 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:32.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964333435643833326430373966303161373163396439663461373266 Dec 12 18:13:32.394865 containerd[1629]: time="2025-12-12T18:13:32.394798330Z" level=info msg="StartContainer for \"9d345d832d079f01a71c9d9f4a72f284424cb27a5b4359d21744acb7cea709b2\" returns successfully" Dec 12 18:13:33.264145 kubelet[2802]: E1212 18:13:33.264065 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:33.277362 kubelet[2802]: I1212 18:13:33.276948 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-l6q5b" podStartSLOduration=1.720709 podStartE2EDuration="3.2769306s" podCreationTimestamp="2025-12-12 18:13:30 +0000 UTC" firstStartedPulling="2025-12-12 18:13:30.75239889 +0000 UTC m=+6.255208041" lastFinishedPulling="2025-12-12 18:13:32.30862049 +0000 UTC m=+7.811429641" observedRunningTime="2025-12-12 18:13:32.68749544 +0000 UTC m=+8.190304591" 
watchObservedRunningTime="2025-12-12 18:13:33.2769306 +0000 UTC m=+8.779739751" Dec 12 18:13:33.681433 kubelet[2802]: E1212 18:13:33.681169 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:34.218203 kubelet[2802]: E1212 18:13:34.217856 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:34.687134 kubelet[2802]: E1212 18:13:34.686945 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:34.690101 kubelet[2802]: E1212 18:13:34.690083 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:35.205543 kubelet[2802]: E1212 18:13:35.205317 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:37.191120 systemd-resolved[1298]: Clock change detected. Flushing caches. Dec 12 18:13:37.191331 systemd-timesyncd[1540]: Contacted time server [2602:81c:1000:2::200]:123 (2.flatcar.pool.ntp.org). Dec 12 18:13:37.191383 systemd-timesyncd[1540]: Initial clock synchronization to Fri 2025-12-12 18:13:37.190955 UTC. Dec 12 18:13:38.760071 sudo[1877]: pam_unix(sudo:session): session closed for user root Dec 12 18:13:38.758000 audit[1877]: USER_END pid=1877 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' Dec 12 18:13:38.764144 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 18:13:38.764205 kernel: audit: type=1106 audit(1765563218.758:534): pid=1877 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:13:38.758000 audit[1877]: CRED_DISP pid=1877 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:13:38.781328 kernel: audit: type=1104 audit(1765563218.758:535): pid=1877 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 18:13:38.820801 sshd[1876]: Connection closed by 139.178.89.65 port 56676 Dec 12 18:13:38.821359 sshd-session[1873]: pam_unix(sshd:session): session closed for user core Dec 12 18:13:38.821000 audit[1873]: USER_END pid=1873 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:38.835700 kernel: audit: type=1106 audit(1765563218.821:536): pid=1873 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:38.836840 systemd-logind[1603]: Session 7 logged out. Waiting for processes to exit. 
Dec 12 18:13:38.836914 systemd[1]: sshd@6-172.234.28.21:22-139.178.89.65:56676.service: Deactivated successfully. Dec 12 18:13:38.822000 audit[1873]: CRED_DISP pid=1873 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:38.843232 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:13:38.843799 systemd[1]: session-7.scope: Consumed 3.432s CPU time, 228.8M memory peak. Dec 12 18:13:38.848315 systemd-logind[1603]: Removed session 7. Dec 12 18:13:38.850332 kernel: audit: type=1104 audit(1765563218.822:537): pid=1873 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:13:38.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.234.28.21:22-139.178.89.65:56676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:13:38.864320 kernel: audit: type=1131 audit(1765563218.835:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.234.28.21:22-139.178.89.65:56676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:13:39.595000 audit[3189]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:39.602345 kernel: audit: type=1325 audit(1765563219.595:539): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:39.595000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcde392340 a2=0 a3=7ffcde39232c items=0 ppid=2909 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:39.638332 kernel: audit: type=1300 audit(1765563219.595:539): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcde392340 a2=0 a3=7ffcde39232c items=0 ppid=2909 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:39.595000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:39.639000 audit[3189]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:39.645193 kernel: audit: type=1327 audit(1765563219.595:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:39.645259 kernel: audit: type=1325 audit(1765563219.639:540): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:39.639000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcde392340 a2=0 a3=0 items=0 ppid=2909 
pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:39.650495 kernel: audit: type=1300 audit(1765563219.639:540): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcde392340 a2=0 a3=0 items=0 ppid=2909 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:39.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:39.665000 audit[3191]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:39.665000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcbc6220a0 a2=0 a3=7ffcbc62208c items=0 ppid=2909 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:39.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:39.670000 audit[3191]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:39.670000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcbc6220a0 a2=0 a3=0 items=0 ppid=2909 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:39.670000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:41.842000 audit[3193]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:41.842000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc5d995250 a2=0 a3=7ffc5d99523c items=0 ppid=2909 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:41.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:41.847000 audit[3193]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:41.847000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5d995250 a2=0 a3=0 items=0 ppid=2909 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:41.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:41.871000 audit[3195]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:41.871000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffec35d3c40 a2=0 a3=7ffec35d3c2c items=0 ppid=2909 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:41.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:41.876000 audit[3195]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:41.876000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec35d3c40 a2=0 a3=0 items=0 ppid=2909 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:41.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:42.586485 update_engine[1606]: I20251212 18:13:42.586385 1606 update_attempter.cc:509] Updating boot flags... Dec 12 18:13:42.902000 audit[3217]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:42.902000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0755e990 a2=0 a3=7ffe0755e97c items=0 ppid=2909 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:42.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:42.907000 audit[3217]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:42.907000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe0755e990 a2=0 a3=0 items=0 
ppid=2909 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:42.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:43.669533 systemd[1]: Created slice kubepods-besteffort-pod14c4954d_40ef_46fd_a87e_d1eaaf4caabc.slice - libcontainer container kubepods-besteffort-pod14c4954d_40ef_46fd_a87e_d1eaaf4caabc.slice. Dec 12 18:13:43.672755 kubelet[2802]: I1212 18:13:43.671889 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14c4954d-40ef-46fd-a87e-d1eaaf4caabc-tigera-ca-bundle\") pod \"calico-typha-7bcdc448db-l7dnf\" (UID: \"14c4954d-40ef-46fd-a87e-d1eaaf4caabc\") " pod="calico-system/calico-typha-7bcdc448db-l7dnf" Dec 12 18:13:43.672755 kubelet[2802]: I1212 18:13:43.672465 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsx9\" (UniqueName: \"kubernetes.io/projected/14c4954d-40ef-46fd-a87e-d1eaaf4caabc-kube-api-access-zvsx9\") pod \"calico-typha-7bcdc448db-l7dnf\" (UID: \"14c4954d-40ef-46fd-a87e-d1eaaf4caabc\") " pod="calico-system/calico-typha-7bcdc448db-l7dnf" Dec 12 18:13:43.672755 kubelet[2802]: I1212 18:13:43.672653 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/14c4954d-40ef-46fd-a87e-d1eaaf4caabc-typha-certs\") pod \"calico-typha-7bcdc448db-l7dnf\" (UID: \"14c4954d-40ef-46fd-a87e-d1eaaf4caabc\") " pod="calico-system/calico-typha-7bcdc448db-l7dnf" Dec 12 18:13:43.907056 systemd[1]: Created slice kubepods-besteffort-pod8ad8993a_43b1_4907_9ca0_1f1219c7e8c9.slice - libcontainer container 
kubepods-besteffort-pod8ad8993a_43b1_4907_9ca0_1f1219c7e8c9.slice. Dec 12 18:13:43.928350 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 12 18:13:43.928454 kernel: audit: type=1325 audit(1765563223.924:549): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:43.924000 audit[3221]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:43.924000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff2305ef80 a2=0 a3=7fff2305ef6c items=0 ppid=2909 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:43.943337 kernel: audit: type=1300 audit(1765563223.924:549): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff2305ef80 a2=0 a3=7fff2305ef6c items=0 ppid=2909 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:43.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:43.952605 kernel: audit: type=1327 audit(1765563223.924:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:43.934000 audit[3221]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:43.964800 kernel: audit: type=1325 audit(1765563223.934:550): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 
18:13:43.964889 kernel: audit: type=1300 audit(1765563223.934:550): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2305ef80 a2=0 a3=0 items=0 ppid=2909 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:43.934000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2305ef80 a2=0 a3=0 items=0 ppid=2909 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:43.969154 kernel: audit: type=1327 audit(1765563223.934:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:43.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:43.974071 kubelet[2802]: I1212 18:13:43.974009 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-cni-log-dir\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974145 kubelet[2802]: I1212 18:13:43.974074 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-var-lib-calico\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974145 kubelet[2802]: I1212 18:13:43.974100 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-certs\" (UniqueName: \"kubernetes.io/secret/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-node-certs\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974145 kubelet[2802]: I1212 18:13:43.974118 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-xtables-lock\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974145 kubelet[2802]: I1212 18:13:43.974137 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-var-run-calico\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974253 kubelet[2802]: I1212 18:13:43.974154 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-flexvol-driver-host\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974253 kubelet[2802]: I1212 18:13:43.974172 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-policysync\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974253 kubelet[2802]: I1212 18:13:43.974191 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-tigera-ca-bundle\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974253 kubelet[2802]: I1212 18:13:43.974208 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkt85\" (UniqueName: \"kubernetes.io/projected/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-kube-api-access-qkt85\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974253 kubelet[2802]: I1212 18:13:43.974228 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-cni-net-dir\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974457 kubelet[2802]: I1212 18:13:43.974244 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-cni-bin-dir\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.974457 kubelet[2802]: I1212 18:13:43.974263 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ad8993a-43b1-4907-9ca0-1f1219c7e8c9-lib-modules\") pod \"calico-node-xbhkd\" (UID: \"8ad8993a-43b1-4907-9ca0-1f1219c7e8c9\") " pod="calico-system/calico-node-xbhkd" Dec 12 18:13:43.975621 kubelet[2802]: E1212 18:13:43.975587 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 
18:13:43.976418 containerd[1629]: time="2025-12-12T18:13:43.976346963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bcdc448db-l7dnf,Uid:14c4954d-40ef-46fd-a87e-d1eaaf4caabc,Namespace:calico-system,Attempt:0,}" Dec 12 18:13:44.010859 containerd[1629]: time="2025-12-12T18:13:44.010777493Z" level=info msg="connecting to shim 62effff33403284a504f4cdbf9ed59ef5e3fe55280b16dc51596fc595cfe0c0d" address="unix:///run/containerd/s/198866ecc3aa6c2c8b9ac2d0863ccff60e7067e1203b38bc83d67cbad9986753" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:13:44.049555 systemd[1]: Started cri-containerd-62effff33403284a504f4cdbf9ed59ef5e3fe55280b16dc51596fc595cfe0c0d.scope - libcontainer container 62effff33403284a504f4cdbf9ed59ef5e3fe55280b16dc51596fc595cfe0c0d. Dec 12 18:13:44.067000 audit: BPF prog-id=163 op=LOAD Dec 12 18:13:44.069000 audit: BPF prog-id=164 op=LOAD Dec 12 18:13:44.072525 kernel: audit: type=1334 audit(1765563224.067:551): prog-id=163 op=LOAD Dec 12 18:13:44.072581 kernel: audit: type=1334 audit(1765563224.069:552): prog-id=164 op=LOAD Dec 12 18:13:44.069000 audit[3243]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3231 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.087327 kernel: audit: type=1300 audit(1765563224.069:552): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3231 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.103178 kernel: audit: type=1327 audit(1765563224.069:552): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656666666633333430333238346135303466346364626639656435 Dec 12 18:13:44.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656666666633333430333238346135303466346364626639656435 Dec 12 18:13:44.105986 kubelet[2802]: E1212 18:13:44.105964 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.106107 kubelet[2802]: W1212 18:13:44.106092 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.106193 kubelet[2802]: E1212 18:13:44.106178 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.069000 audit: BPF prog-id=164 op=UNLOAD Dec 12 18:13:44.069000 audit[3243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3231 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656666666633333430333238346135303466346364626639656435 Dec 12 18:13:44.069000 audit: BPF prog-id=165 op=LOAD Dec 12 18:13:44.069000 audit[3243]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3231 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656666666633333430333238346135303466346364626639656435 Dec 12 18:13:44.069000 audit: BPF prog-id=166 op=LOAD Dec 12 18:13:44.069000 audit[3243]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3231 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.069000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656666666633333430333238346135303466346364626639656435 Dec 12 18:13:44.069000 audit: BPF prog-id=166 op=UNLOAD Dec 12 18:13:44.069000 audit[3243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3231 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656666666633333430333238346135303466346364626639656435 Dec 12 18:13:44.069000 audit: BPF prog-id=165 op=UNLOAD Dec 12 18:13:44.069000 audit[3243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3231 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656666666633333430333238346135303466346364626639656435 Dec 12 18:13:44.069000 audit: BPF prog-id=167 op=LOAD Dec 12 18:13:44.069000 audit[3243]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3231 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:13:44.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632656666666633333430333238346135303466346364626639656435 Dec 12 18:13:44.113932 kubelet[2802]: E1212 18:13:44.113874 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.113932 kubelet[2802]: W1212 18:13:44.113906 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.113932 kubelet[2802]: E1212 18:13:44.113932 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.150611 containerd[1629]: time="2025-12-12T18:13:44.150539513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bcdc448db-l7dnf,Uid:14c4954d-40ef-46fd-a87e-d1eaaf4caabc,Namespace:calico-system,Attempt:0,} returns sandbox id \"62effff33403284a504f4cdbf9ed59ef5e3fe55280b16dc51596fc595cfe0c0d\"" Dec 12 18:13:44.154324 kubelet[2802]: E1212 18:13:44.154249 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:44.156953 containerd[1629]: time="2025-12-12T18:13:44.156898853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:13:44.167479 kubelet[2802]: E1212 18:13:44.167436 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:13:44.171449 kubelet[2802]: E1212 18:13:44.171358 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.171449 kubelet[2802]: W1212 18:13:44.171379 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.171449 kubelet[2802]: E1212 18:13:44.171414 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.172508 kubelet[2802]: E1212 18:13:44.172142 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.172508 kubelet[2802]: W1212 18:13:44.172156 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.172508 kubelet[2802]: E1212 18:13:44.172165 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.172508 kubelet[2802]: E1212 18:13:44.172439 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.172508 kubelet[2802]: W1212 18:13:44.172447 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.172508 kubelet[2802]: E1212 18:13:44.172483 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.172947 kubelet[2802]: E1212 18:13:44.172738 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.172947 kubelet[2802]: W1212 18:13:44.172752 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.172947 kubelet[2802]: E1212 18:13:44.172762 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.173047 kubelet[2802]: E1212 18:13:44.172980 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.173047 kubelet[2802]: W1212 18:13:44.172989 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.173047 kubelet[2802]: E1212 18:13:44.172997 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.173317 kubelet[2802]: E1212 18:13:44.173206 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.173317 kubelet[2802]: W1212 18:13:44.173220 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.173317 kubelet[2802]: E1212 18:13:44.173228 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.174055 kubelet[2802]: E1212 18:13:44.173956 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.174055 kubelet[2802]: W1212 18:13:44.173976 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.174055 kubelet[2802]: E1212 18:13:44.173986 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.174340 kubelet[2802]: E1212 18:13:44.174246 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.174340 kubelet[2802]: W1212 18:13:44.174278 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.174340 kubelet[2802]: E1212 18:13:44.174288 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.174785 kubelet[2802]: E1212 18:13:44.174666 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.174785 kubelet[2802]: W1212 18:13:44.174688 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.174785 kubelet[2802]: E1212 18:13:44.174696 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.175194 kubelet[2802]: E1212 18:13:44.175132 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.175194 kubelet[2802]: W1212 18:13:44.175167 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.175194 kubelet[2802]: E1212 18:13:44.175176 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.175644 kubelet[2802]: E1212 18:13:44.175623 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.175644 kubelet[2802]: W1212 18:13:44.175638 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.175978 kubelet[2802]: E1212 18:13:44.175647 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.176432 kubelet[2802]: E1212 18:13:44.176389 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.176432 kubelet[2802]: W1212 18:13:44.176407 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.176432 kubelet[2802]: E1212 18:13:44.176425 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.176976 kubelet[2802]: E1212 18:13:44.176915 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.176976 kubelet[2802]: W1212 18:13:44.176931 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.176976 kubelet[2802]: E1212 18:13:44.176940 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.177333 kubelet[2802]: E1212 18:13:44.177284 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.177560 kubelet[2802]: W1212 18:13:44.177476 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.177560 kubelet[2802]: E1212 18:13:44.177496 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.181631 kubelet[2802]: E1212 18:13:44.179012 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.181631 kubelet[2802]: W1212 18:13:44.179038 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.181631 kubelet[2802]: E1212 18:13:44.179047 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.181631 kubelet[2802]: E1212 18:13:44.179477 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.181631 kubelet[2802]: W1212 18:13:44.179485 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.181631 kubelet[2802]: E1212 18:13:44.179494 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.182111 kubelet[2802]: E1212 18:13:44.182087 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.182111 kubelet[2802]: W1212 18:13:44.182109 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.182177 kubelet[2802]: E1212 18:13:44.182120 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.182352 kubelet[2802]: E1212 18:13:44.182329 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.182352 kubelet[2802]: W1212 18:13:44.182344 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.182352 kubelet[2802]: E1212 18:13:44.182353 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.182792 kubelet[2802]: E1212 18:13:44.182745 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.182792 kubelet[2802]: W1212 18:13:44.182766 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.182792 kubelet[2802]: E1212 18:13:44.182775 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.184136 kubelet[2802]: E1212 18:13:44.183874 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.184136 kubelet[2802]: W1212 18:13:44.183897 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.184136 kubelet[2802]: E1212 18:13:44.183911 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.184857 kubelet[2802]: E1212 18:13:44.184812 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.184857 kubelet[2802]: W1212 18:13:44.184833 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.184857 kubelet[2802]: E1212 18:13:44.184842 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.184955 kubelet[2802]: I1212 18:13:44.184890 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f46da395-0309-47b8-bfd7-ce69c3c79781-socket-dir\") pod \"csi-node-driver-8ggwr\" (UID: \"f46da395-0309-47b8-bfd7-ce69c3c79781\") " pod="calico-system/csi-node-driver-8ggwr" Dec 12 18:13:44.185250 kubelet[2802]: E1212 18:13:44.185126 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.185250 kubelet[2802]: W1212 18:13:44.185146 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.185250 kubelet[2802]: E1212 18:13:44.185168 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.185250 kubelet[2802]: I1212 18:13:44.185183 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f46da395-0309-47b8-bfd7-ce69c3c79781-kubelet-dir\") pod \"csi-node-driver-8ggwr\" (UID: \"f46da395-0309-47b8-bfd7-ce69c3c79781\") " pod="calico-system/csi-node-driver-8ggwr" Dec 12 18:13:44.185493 kubelet[2802]: E1212 18:13:44.185440 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.185493 kubelet[2802]: W1212 18:13:44.185450 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.185493 kubelet[2802]: E1212 18:13:44.185472 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.185493 kubelet[2802]: I1212 18:13:44.185486 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f46da395-0309-47b8-bfd7-ce69c3c79781-varrun\") pod \"csi-node-driver-8ggwr\" (UID: \"f46da395-0309-47b8-bfd7-ce69c3c79781\") " pod="calico-system/csi-node-driver-8ggwr" Dec 12 18:13:44.185938 kubelet[2802]: E1212 18:13:44.185704 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.185938 kubelet[2802]: W1212 18:13:44.185719 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.185938 kubelet[2802]: E1212 18:13:44.185731 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.185938 kubelet[2802]: I1212 18:13:44.185927 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6hbb\" (UniqueName: \"kubernetes.io/projected/f46da395-0309-47b8-bfd7-ce69c3c79781-kube-api-access-g6hbb\") pod \"csi-node-driver-8ggwr\" (UID: \"f46da395-0309-47b8-bfd7-ce69c3c79781\") " pod="calico-system/csi-node-driver-8ggwr" Dec 12 18:13:44.186208 kubelet[2802]: E1212 18:13:44.186120 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.186208 kubelet[2802]: W1212 18:13:44.186159 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.186272 kubelet[2802]: E1212 18:13:44.186240 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.186272 kubelet[2802]: I1212 18:13:44.186258 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f46da395-0309-47b8-bfd7-ce69c3c79781-registration-dir\") pod \"csi-node-driver-8ggwr\" (UID: \"f46da395-0309-47b8-bfd7-ce69c3c79781\") " pod="calico-system/csi-node-driver-8ggwr" Dec 12 18:13:44.187370 kubelet[2802]: E1212 18:13:44.186433 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.187370 kubelet[2802]: W1212 18:13:44.186441 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.187370 kubelet[2802]: E1212 18:13:44.186523 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.187370 kubelet[2802]: E1212 18:13:44.186858 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.187370 kubelet[2802]: W1212 18:13:44.186866 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.187370 kubelet[2802]: E1212 18:13:44.186946 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.187370 kubelet[2802]: E1212 18:13:44.187088 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.187370 kubelet[2802]: W1212 18:13:44.187096 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.187370 kubelet[2802]: E1212 18:13:44.187175 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.187370 kubelet[2802]: E1212 18:13:44.187340 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.187605 kubelet[2802]: W1212 18:13:44.187348 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.187605 kubelet[2802]: E1212 18:13:44.187371 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.187605 kubelet[2802]: E1212 18:13:44.187594 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.187684 kubelet[2802]: W1212 18:13:44.187606 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.187684 kubelet[2802]: E1212 18:13:44.187638 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.188475 kubelet[2802]: E1212 18:13:44.188065 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.188475 kubelet[2802]: W1212 18:13:44.188085 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.188475 kubelet[2802]: E1212 18:13:44.188093 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.188475 kubelet[2802]: E1212 18:13:44.188395 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.188475 kubelet[2802]: W1212 18:13:44.188405 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.188475 kubelet[2802]: E1212 18:13:44.188416 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.188899 kubelet[2802]: E1212 18:13:44.188682 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.188899 kubelet[2802]: W1212 18:13:44.188697 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.188899 kubelet[2802]: E1212 18:13:44.188705 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.188985 kubelet[2802]: E1212 18:13:44.188919 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.188985 kubelet[2802]: W1212 18:13:44.188929 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.188985 kubelet[2802]: E1212 18:13:44.188936 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.189316 kubelet[2802]: E1212 18:13:44.189149 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.189316 kubelet[2802]: W1212 18:13:44.189163 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.189316 kubelet[2802]: E1212 18:13:44.189171 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.211331 kubelet[2802]: E1212 18:13:44.210873 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:44.211760 containerd[1629]: time="2025-12-12T18:13:44.211710603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xbhkd,Uid:8ad8993a-43b1-4907-9ca0-1f1219c7e8c9,Namespace:calico-system,Attempt:0,}" Dec 12 18:13:44.239609 containerd[1629]: time="2025-12-12T18:13:44.238398273Z" level=info msg="connecting to shim 7d4859e1e39943190953ec64f6d8cb484644f48056c2fe4aa50cba8750b2d02a" address="unix:///run/containerd/s/b9a9d609c2d50ba11a9523b4078e8beba83dbe773372ac3aa83684f61f45df29" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:13:44.277918 systemd[1]: Started cri-containerd-7d4859e1e39943190953ec64f6d8cb484644f48056c2fe4aa50cba8750b2d02a.scope - libcontainer container 7d4859e1e39943190953ec64f6d8cb484644f48056c2fe4aa50cba8750b2d02a. Dec 12 18:13:44.287273 kubelet[2802]: E1212 18:13:44.287248 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.287411 kubelet[2802]: W1212 18:13:44.287396 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.287705 kubelet[2802]: E1212 18:13:44.287515 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.288419 kubelet[2802]: E1212 18:13:44.288406 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.288632 kubelet[2802]: W1212 18:13:44.288584 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.288730 kubelet[2802]: E1212 18:13:44.288714 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.289442 kubelet[2802]: E1212 18:13:44.289428 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.289529 kubelet[2802]: W1212 18:13:44.289517 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.289743 kubelet[2802]: E1212 18:13:44.289723 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.290094 kubelet[2802]: E1212 18:13:44.290081 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.290161 kubelet[2802]: W1212 18:13:44.290138 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.290333 kubelet[2802]: E1212 18:13:44.290262 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.290855 kubelet[2802]: E1212 18:13:44.290772 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.290855 kubelet[2802]: W1212 18:13:44.290784 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.291161 kubelet[2802]: E1212 18:13:44.290946 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.291341 kubelet[2802]: E1212 18:13:44.291329 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.291416 kubelet[2802]: W1212 18:13:44.291405 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.291687 kubelet[2802]: E1212 18:13:44.291633 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.292458 kubelet[2802]: E1212 18:13:44.292398 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.292458 kubelet[2802]: W1212 18:13:44.292452 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.293065 kubelet[2802]: E1212 18:13:44.292986 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.293378 kubelet[2802]: E1212 18:13:44.293276 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.293378 kubelet[2802]: W1212 18:13:44.293356 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.293627 kubelet[2802]: E1212 18:13:44.293591 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.295342 kubelet[2802]: E1212 18:13:44.295282 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.295342 kubelet[2802]: W1212 18:13:44.295334 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.296115 kubelet[2802]: E1212 18:13:44.296084 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.296277 kubelet[2802]: W1212 18:13:44.296231 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.297070 kubelet[2802]: E1212 18:13:44.297023 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.297070 kubelet[2802]: E1212 18:13:44.297051 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.297789 kubelet[2802]: E1212 18:13:44.297759 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.297789 kubelet[2802]: W1212 18:13:44.297778 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.299453 kubelet[2802]: E1212 18:13:44.299410 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.299929 kubelet[2802]: E1212 18:13:44.299896 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.299984 kubelet[2802]: W1212 18:13:44.299967 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.300522 kubelet[2802]: E1212 18:13:44.300489 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.302327 kubelet[2802]: E1212 18:13:44.302238 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.302327 kubelet[2802]: W1212 18:13:44.302257 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.303359 kubelet[2802]: E1212 18:13:44.303287 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.304486 kubelet[2802]: E1212 18:13:44.304286 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.304486 kubelet[2802]: W1212 18:13:44.304481 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.303000 audit: BPF prog-id=168 op=LOAD Dec 12 18:13:44.305351 kubelet[2802]: E1212 18:13:44.305241 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.305351 kubelet[2802]: W1212 18:13:44.305257 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.305514 kubelet[2802]: E1212 18:13:44.305490 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.305514 kubelet[2802]: E1212 18:13:44.305512 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.304000 audit: BPF prog-id=169 op=LOAD Dec 12 18:13:44.305882 kubelet[2802]: E1212 18:13:44.305785 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.305882 kubelet[2802]: W1212 18:13:44.305793 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.305882 kubelet[2802]: E1212 18:13:44.305833 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.304000 audit[3338]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=3327 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343835396531653339393433313930393533656336346636643863 Dec 12 18:13:44.306246 kubelet[2802]: E1212 18:13:44.306067 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.306246 kubelet[2802]: W1212 18:13:44.306081 2802 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.306246 kubelet[2802]: E1212 18:13:44.306181 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.305000 audit: BPF prog-id=169 op=UNLOAD Dec 12 18:13:44.305000 audit[3338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3327 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343835396531653339393433313930393533656336346636643863 Dec 12 18:13:44.305000 audit: BPF prog-id=170 op=LOAD Dec 12 18:13:44.305000 audit[3338]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=3327 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343835396531653339393433313930393533656336346636643863 Dec 12 18:13:44.305000 audit: BPF prog-id=171 op=LOAD Dec 12 18:13:44.305000 audit[3338]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=3327 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343835396531653339393433313930393533656336346636643863 Dec 12 18:13:44.305000 audit: BPF prog-id=171 op=UNLOAD Dec 12 18:13:44.305000 audit[3338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3327 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343835396531653339393433313930393533656336346636643863 Dec 12 18:13:44.305000 audit: BPF prog-id=170 op=UNLOAD Dec 12 18:13:44.305000 audit[3338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3327 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343835396531653339393433313930393533656336346636643863 Dec 12 18:13:44.305000 audit: BPF prog-id=172 op=LOAD Dec 12 18:13:44.305000 audit[3338]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=3327 pid=3338 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:44.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343835396531653339393433313930393533656336346636643863 Dec 12 18:13:44.308111 kubelet[2802]: E1212 18:13:44.306445 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.308111 kubelet[2802]: W1212 18:13:44.306454 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.308111 kubelet[2802]: E1212 18:13:44.306492 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.308111 kubelet[2802]: E1212 18:13:44.306771 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.308111 kubelet[2802]: W1212 18:13:44.306800 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.308111 kubelet[2802]: E1212 18:13:44.306878 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.308111 kubelet[2802]: E1212 18:13:44.307200 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.308111 kubelet[2802]: W1212 18:13:44.307208 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.308111 kubelet[2802]: E1212 18:13:44.307350 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.309823 kubelet[2802]: E1212 18:13:44.308945 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.309823 kubelet[2802]: W1212 18:13:44.308959 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.309823 kubelet[2802]: E1212 18:13:44.309013 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.310114 kubelet[2802]: E1212 18:13:44.310059 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.310176 kubelet[2802]: W1212 18:13:44.310162 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.310313 kubelet[2802]: E1212 18:13:44.310260 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.310558 kubelet[2802]: E1212 18:13:44.310545 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.310748 kubelet[2802]: W1212 18:13:44.310734 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.310938 kubelet[2802]: E1212 18:13:44.310925 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.311281 kubelet[2802]: E1212 18:13:44.311236 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.311281 kubelet[2802]: W1212 18:13:44.311247 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.311445 kubelet[2802]: E1212 18:13:44.311424 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.311925 kubelet[2802]: E1212 18:13:44.311833 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.312004 kubelet[2802]: W1212 18:13:44.311980 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.312082 kubelet[2802]: E1212 18:13:44.312052 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:13:44.327063 kubelet[2802]: E1212 18:13:44.327029 2802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:13:44.327119 kubelet[2802]: W1212 18:13:44.327070 2802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:13:44.327119 kubelet[2802]: E1212 18:13:44.327089 2802 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:13:44.334256 containerd[1629]: time="2025-12-12T18:13:44.334220053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xbhkd,Uid:8ad8993a-43b1-4907-9ca0-1f1219c7e8c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d4859e1e39943190953ec64f6d8cb484644f48056c2fe4aa50cba8750b2d02a\"" Dec 12 18:13:44.335513 kubelet[2802]: E1212 18:13:44.335456 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:44.932354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904889706.mount: Deactivated successfully. 
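The FlexVolume errors above all trace to one cause: the kubelet probes `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` with an `init` call, the binary does not exist yet (Calico's `flexvol-driver` container only installs it later in this log), so stdout is empty and the JSON unmarshal fails with "unexpected end of JSON input". As a minimal sketch of the contract the kubelet expects — not the actual Calico driver — a FlexVolume driver answering `init` looks like this:

```python
import json
import sys


def flexvolume_init() -> str:
    # The kubelet runs `<driver> init` and parses the driver's stdout as
    # JSON. An empty stdout is exactly the "unexpected end of JSON input"
    # error repeated in the log above.
    return json.dumps({
        "status": "Success",
        # Node-local driver: kubelet should not attempt attach/detach calls.
        "capabilities": {"attach": False},
    })


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        print(flexvolume_init())
    else:
        # Every call the driver does not implement must still emit valid JSON.
        print(json.dumps({"status": "Not supported"}))
```

Once the real `uds` binary is written into the plugin directory, the kubelet's periodic plugin probe succeeds and this error burst stops on its own; the errors here are transient startup noise, not a persistent fault.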
Dec 12 18:13:45.480364 kubelet[2802]: E1212 18:13:45.480107 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:13:45.598189 containerd[1629]: time="2025-12-12T18:13:45.597312393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:45.598189 containerd[1629]: time="2025-12-12T18:13:45.598156233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 12 18:13:45.598780 containerd[1629]: time="2025-12-12T18:13:45.598753513Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:45.600525 containerd[1629]: time="2025-12-12T18:13:45.600503143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:45.601031 containerd[1629]: time="2025-12-12T18:13:45.600976163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.44401603s" Dec 12 18:13:45.601072 containerd[1629]: time="2025-12-12T18:13:45.601031963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 18:13:45.603549 containerd[1629]: time="2025-12-12T18:13:45.603522183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:13:45.624560 containerd[1629]: time="2025-12-12T18:13:45.624263413Z" level=info msg="CreateContainer within sandbox \"62effff33403284a504f4cdbf9ed59ef5e3fe55280b16dc51596fc595cfe0c0d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 18:13:45.635916 containerd[1629]: time="2025-12-12T18:13:45.635865713Z" level=info msg="Container 87eebc70a40b1cde5a171c4e5b054386f954a6b47442a3c35fe3fd0c7c798185: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:13:45.643827 containerd[1629]: time="2025-12-12T18:13:45.643802293Z" level=info msg="CreateContainer within sandbox \"62effff33403284a504f4cdbf9ed59ef5e3fe55280b16dc51596fc595cfe0c0d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"87eebc70a40b1cde5a171c4e5b054386f954a6b47442a3c35fe3fd0c7c798185\"" Dec 12 18:13:45.646750 containerd[1629]: time="2025-12-12T18:13:45.646714273Z" level=info msg="StartContainer for \"87eebc70a40b1cde5a171c4e5b054386f954a6b47442a3c35fe3fd0c7c798185\"" Dec 12 18:13:45.651501 containerd[1629]: time="2025-12-12T18:13:45.651409943Z" level=info msg="connecting to shim 87eebc70a40b1cde5a171c4e5b054386f954a6b47442a3c35fe3fd0c7c798185" address="unix:///run/containerd/s/198866ecc3aa6c2c8b9ac2d0863ccff60e7067e1203b38bc83d67cbad9986753" protocol=ttrpc version=3 Dec 12 18:13:45.679637 systemd[1]: Started cri-containerd-87eebc70a40b1cde5a171c4e5b054386f954a6b47442a3c35fe3fd0c7c798185.scope - libcontainer container 87eebc70a40b1cde5a171c4e5b054386f954a6b47442a3c35fe3fd0c7c798185. 
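The `audit: PROCTITLE` records interleaved throughout this log hex-encode the audited process's full command line, with the original NUL separators between arguments preserved. A short sketch to make them readable (the leading bytes `72756E63` in the records above decode to `runc`):

```python
def decode_proctitle(hex_argv: str) -> list[str]:
    # audit PROCTITLE fields are the raw argv bytes, hex-encoded; arguments
    # are separated by the NUL bytes argv already contains in memory.
    raw = bytes.fromhex(hex_argv)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]


# Prefix of the proctitle values logged above:
# 72756E63 00 2D2D726F6F74  ->  ["runc", "--root"]
```

Decoded in full, these records show runc being invoked with `--root /run/containerd/runc/k8s.io` for the containerd v2 task shims, which matches the `cri-containerd-*` scopes systemd starts just before each audit burst.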
Dec 12 18:13:45.704000 audit: BPF prog-id=173 op=LOAD Dec 12 18:13:45.705000 audit: BPF prog-id=174 op=LOAD Dec 12 18:13:45.705000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3231 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:45.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837656562633730613430623163646535613137316334653562303534 Dec 12 18:13:45.705000 audit: BPF prog-id=174 op=UNLOAD Dec 12 18:13:45.705000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3231 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:45.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837656562633730613430623163646535613137316334653562303534 Dec 12 18:13:45.705000 audit: BPF prog-id=175 op=LOAD Dec 12 18:13:45.705000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3231 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:45.705000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837656562633730613430623163646535613137316334653562303534 Dec 12 18:13:45.705000 audit: BPF prog-id=176 op=LOAD Dec 12 18:13:45.705000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3231 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:45.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837656562633730613430623163646535613137316334653562303534 Dec 12 18:13:45.705000 audit: BPF prog-id=176 op=UNLOAD Dec 12 18:13:45.705000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3231 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:45.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837656562633730613430623163646535613137316334653562303534 Dec 12 18:13:45.705000 audit: BPF prog-id=175 op=UNLOAD Dec 12 18:13:45.705000 audit[3399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3231 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:13:45.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837656562633730613430623163646535613137316334653562303534 Dec 12 18:13:45.706000 audit: BPF prog-id=177 op=LOAD Dec 12 18:13:45.706000 audit[3399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3231 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:45.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837656562633730613430623163646535613137316334653562303534 Dec 12 18:13:45.749390 containerd[1629]: time="2025-12-12T18:13:45.749149023Z" level=info msg="StartContainer for \"87eebc70a40b1cde5a171c4e5b054386f954a6b47442a3c35fe3fd0c7c798185\" returns successfully" Dec 12 18:13:46.385170 containerd[1629]: time="2025-12-12T18:13:46.385048953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:46.386789 containerd[1629]: time="2025-12-12T18:13:46.386750713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 18:13:46.387588 containerd[1629]: time="2025-12-12T18:13:46.387536813Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:46.389614 containerd[1629]: time="2025-12-12T18:13:46.389545253Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:46.390293 containerd[1629]: time="2025-12-12T18:13:46.390254543Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 786.3234ms" Dec 12 18:13:46.390373 containerd[1629]: time="2025-12-12T18:13:46.390353063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:13:46.393989 containerd[1629]: time="2025-12-12T18:13:46.393949163Z" level=info msg="CreateContainer within sandbox \"7d4859e1e39943190953ec64f6d8cb484644f48056c2fe4aa50cba8750b2d02a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:13:46.404475 containerd[1629]: time="2025-12-12T18:13:46.403455743Z" level=info msg="Container 8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:13:46.415918 containerd[1629]: time="2025-12-12T18:13:46.415866903Z" level=info msg="CreateContainer within sandbox \"7d4859e1e39943190953ec64f6d8cb484644f48056c2fe4aa50cba8750b2d02a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124\"" Dec 12 18:13:46.416475 containerd[1629]: time="2025-12-12T18:13:46.416442463Z" level=info msg="StartContainer for \"8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124\"" Dec 12 18:13:46.419106 containerd[1629]: 
time="2025-12-12T18:13:46.419077073Z" level=info msg="connecting to shim 8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124" address="unix:///run/containerd/s/b9a9d609c2d50ba11a9523b4078e8beba83dbe773372ac3aa83684f61f45df29" protocol=ttrpc version=3 Dec 12 18:13:46.442488 systemd[1]: Started cri-containerd-8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124.scope - libcontainer container 8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124. Dec 12 18:13:46.498000 audit: BPF prog-id=178 op=LOAD Dec 12 18:13:46.498000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3327 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:46.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303833373462643938613862626561383964623235653738393035 Dec 12 18:13:46.498000 audit: BPF prog-id=179 op=LOAD Dec 12 18:13:46.498000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3327 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:46.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303833373462643938613862626561383964623235653738393035 Dec 12 18:13:46.498000 audit: BPF prog-id=179 op=UNLOAD Dec 12 18:13:46.498000 audit[3439]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=16 a1=0 a2=0 a3=0 items=0 ppid=3327 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:46.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303833373462643938613862626561383964623235653738393035 Dec 12 18:13:46.498000 audit: BPF prog-id=178 op=UNLOAD Dec 12 18:13:46.498000 audit[3439]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3327 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:46.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303833373462643938613862626561383964623235653738393035 Dec 12 18:13:46.499000 audit: BPF prog-id=180 op=LOAD Dec 12 18:13:46.499000 audit[3439]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3327 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:46.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836303833373462643938613862626561383964623235653738393035 Dec 12 18:13:46.534507 containerd[1629]: time="2025-12-12T18:13:46.533421373Z" level=info msg="StartContainer 
for \"8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124\" returns successfully" Dec 12 18:13:46.556958 systemd[1]: cri-containerd-8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124.scope: Deactivated successfully. Dec 12 18:13:46.559000 audit: BPF prog-id=180 op=UNLOAD Dec 12 18:13:46.565752 containerd[1629]: time="2025-12-12T18:13:46.565673203Z" level=info msg="received container exit event container_id:\"8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124\" id:\"8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124\" pid:3452 exited_at:{seconds:1765563226 nanos:563185253}" Dec 12 18:13:46.585010 kubelet[2802]: E1212 18:13:46.584985 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:46.590616 kubelet[2802]: E1212 18:13:46.590595 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:46.613735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8608374bd98a8bbea89db25e78905bc7530aa6a608a2e0a14804cb59d597f124-rootfs.mount: Deactivated successfully. 
Dec 12 18:13:46.640944 kubelet[2802]: I1212 18:13:46.638587 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bcdc448db-l7dnf" podStartSLOduration=2.193138133 podStartE2EDuration="3.638568423s" podCreationTimestamp="2025-12-12 18:13:43 +0000 UTC" firstStartedPulling="2025-12-12 18:13:44.156542293 +0000 UTC m=+18.793560611" lastFinishedPulling="2025-12-12 18:13:45.601972593 +0000 UTC m=+20.238990901" observedRunningTime="2025-12-12 18:13:46.636771913 +0000 UTC m=+21.273790231" watchObservedRunningTime="2025-12-12 18:13:46.638568423 +0000 UTC m=+21.275586741" Dec 12 18:13:47.479495 kubelet[2802]: E1212 18:13:47.478990 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:13:47.593614 kubelet[2802]: I1212 18:13:47.593572 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:13:47.594102 kubelet[2802]: E1212 18:13:47.593913 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:47.594395 kubelet[2802]: E1212 18:13:47.594377 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:47.596002 containerd[1629]: time="2025-12-12T18:13:47.595954963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:13:49.479661 kubelet[2802]: E1212 18:13:49.479590 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:13:49.518128 containerd[1629]: time="2025-12-12T18:13:49.518077343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:49.519171 containerd[1629]: time="2025-12-12T18:13:49.518981013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 12 18:13:49.519692 containerd[1629]: time="2025-12-12T18:13:49.519659353Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:49.521829 containerd[1629]: time="2025-12-12T18:13:49.521794863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:49.522245 containerd[1629]: time="2025-12-12T18:13:49.522211033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.9262181s" Dec 12 18:13:49.522308 containerd[1629]: time="2025-12-12T18:13:49.522245523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:13:49.525094 containerd[1629]: time="2025-12-12T18:13:49.525068463Z" level=info msg="CreateContainer within sandbox 
\"7d4859e1e39943190953ec64f6d8cb484644f48056c2fe4aa50cba8750b2d02a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:13:49.536482 containerd[1629]: time="2025-12-12T18:13:49.534577733Z" level=info msg="Container 4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:13:49.540388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850145646.mount: Deactivated successfully. Dec 12 18:13:49.546473 containerd[1629]: time="2025-12-12T18:13:49.546440183Z" level=info msg="CreateContainer within sandbox \"7d4859e1e39943190953ec64f6d8cb484644f48056c2fe4aa50cba8750b2d02a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6\"" Dec 12 18:13:49.547566 containerd[1629]: time="2025-12-12T18:13:49.547542013Z" level=info msg="StartContainer for \"4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6\"" Dec 12 18:13:49.551421 containerd[1629]: time="2025-12-12T18:13:49.551384043Z" level=info msg="connecting to shim 4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6" address="unix:///run/containerd/s/b9a9d609c2d50ba11a9523b4078e8beba83dbe773372ac3aa83684f61f45df29" protocol=ttrpc version=3 Dec 12 18:13:49.583481 systemd[1]: Started cri-containerd-4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6.scope - libcontainer container 4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6. 
Dec 12 18:13:49.669390 kernel: kauditd_printk_skb: 78 callbacks suppressed Dec 12 18:13:49.669535 kernel: audit: type=1334 audit(1765563229.666:581): prog-id=181 op=LOAD Dec 12 18:13:49.666000 audit: BPF prog-id=181 op=LOAD Dec 12 18:13:49.666000 audit[3499]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3327 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:49.683857 kernel: audit: type=1300 audit(1765563229.666:581): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3327 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:49.684033 kernel: audit: type=1327 audit(1765563229.666:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437383462623631333335326464356232383939623533333838653730 Dec 12 18:13:49.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437383462623631333335326464356232383939623533333838653730 Dec 12 18:13:49.691955 kernel: audit: type=1334 audit(1765563229.666:582): prog-id=182 op=LOAD Dec 12 18:13:49.666000 audit: BPF prog-id=182 op=LOAD Dec 12 18:13:49.700373 kernel: audit: type=1300 audit(1765563229.666:582): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3327 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:49.666000 audit[3499]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3327 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:49.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437383462623631333335326464356232383939623533333838653730 Dec 12 18:13:49.709526 kernel: audit: type=1327 audit(1765563229.666:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437383462623631333335326464356232383939623533333838653730 Dec 12 18:13:49.709740 kernel: audit: type=1334 audit(1765563229.666:583): prog-id=182 op=UNLOAD Dec 12 18:13:49.666000 audit: BPF prog-id=182 op=UNLOAD Dec 12 18:13:49.666000 audit[3499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3327 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:49.711911 kernel: audit: type=1300 audit(1765563229.666:583): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3327 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:49.723363 kernel: audit: type=1327 audit(1765563229.666:583): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437383462623631333335326464356232383939623533333838653730 Dec 12 18:13:49.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437383462623631333335326464356232383939623533333838653730 Dec 12 18:13:49.666000 audit: BPF prog-id=181 op=UNLOAD Dec 12 18:13:49.728164 kernel: audit: type=1334 audit(1765563229.666:584): prog-id=181 op=UNLOAD Dec 12 18:13:49.666000 audit[3499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3327 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:49.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437383462623631333335326464356232383939623533333838653730 Dec 12 18:13:49.666000 audit: BPF prog-id=183 op=LOAD Dec 12 18:13:49.666000 audit[3499]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3327 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:49.666000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437383462623631333335326464356232383939623533333838653730 Dec 12 18:13:49.732486 containerd[1629]: time="2025-12-12T18:13:49.732255103Z" level=info msg="StartContainer for \"4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6\" returns successfully" Dec 12 18:13:50.230461 containerd[1629]: time="2025-12-12T18:13:50.230357563Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:13:50.233161 systemd[1]: cri-containerd-4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6.scope: Deactivated successfully. Dec 12 18:13:50.233778 systemd[1]: cri-containerd-4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6.scope: Consumed 572ms CPU time, 194.1M memory peak, 171.3M written to disk. Dec 12 18:13:50.237115 containerd[1629]: time="2025-12-12T18:13:50.237087323Z" level=info msg="received container exit event container_id:\"4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6\" id:\"4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6\" pid:3512 exited_at:{seconds:1765563230 nanos:236893333}" Dec 12 18:13:50.236000 audit: BPF prog-id=183 op=UNLOAD Dec 12 18:13:50.264922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4784bb613352dd5b2899b53388e70d5b5be78e9885b642e7c916d87257b009f6-rootfs.mount: Deactivated successfully. 
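The `PROCTITLE` values in the audit records above are the process's argv, hex-encoded with NUL bytes separating the arguments (the same output `ausearch -i` would render as text). A minimal sketch for decoding one by hand; the sample string is an abbreviated excerpt of the hex logged above:

```python
def decode_proctitle(hex_value: str) -> list[str]:
    """Decode an audit PROCTITLE field: hex-encoded argv, NUL-separated."""
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00")]

# Abbreviated prefix of the PROCTITLE values in the log above.
sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
print(decode_proctitle(sample))
# → ['runc', '--root', '/run/containerd/runc/k8s.io']
```

Decoding the full values shows these records are runc invocations (`runc --root /run/containerd/runc/k8s.io --log …`) made by containerd's v2 task shim for each container start.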
Dec 12 18:13:50.332215 kubelet[2802]: I1212 18:13:50.332193 2802 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:13:50.363055 systemd[1]: Created slice kubepods-burstable-pod00c15d96_e0b6_4ce0_a16d_0db018401241.slice - libcontainer container kubepods-burstable-pod00c15d96_e0b6_4ce0_a16d_0db018401241.slice. Dec 12 18:13:50.376983 systemd[1]: Created slice kubepods-besteffort-pod0e30af07_2513_48c2_a9a7_015103c4abdc.slice - libcontainer container kubepods-besteffort-pod0e30af07_2513_48c2_a9a7_015103c4abdc.slice. Dec 12 18:13:50.382635 kubelet[2802]: W1212 18:13:50.382030 2802 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:172-234-28-21" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node '172-234-28-21' and this object Dec 12 18:13:50.382990 kubelet[2802]: E1212 18:13:50.382951 2802 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:172-234-28-21\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-234-28-21' and this object" logger="UnhandledError" Dec 12 18:13:50.383875 kubelet[2802]: W1212 18:13:50.382277 2802 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:172-234-28-21" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node '172-234-28-21' and this object Dec 12 18:13:50.383875 kubelet[2802]: E1212 18:13:50.383860 2802 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list 
*v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:172-234-28-21\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-234-28-21' and this object" logger="UnhandledError" Dec 12 18:13:50.388970 systemd[1]: Created slice kubepods-besteffort-podb35a9cda_d256_490b_8223_d4936abd6ff5.slice - libcontainer container kubepods-besteffort-podb35a9cda_d256_490b_8223_d4936abd6ff5.slice. Dec 12 18:13:50.400144 systemd[1]: Created slice kubepods-burstable-pod7d6b7ada_d360_4468_8cf8_f61bde72489e.slice - libcontainer container kubepods-burstable-pod7d6b7ada_d360_4468_8cf8_f61bde72489e.slice. Dec 12 18:13:50.407023 systemd[1]: Created slice kubepods-besteffort-podc713cd34_08f7_480c_b91d_bedb3b68bb36.slice - libcontainer container kubepods-besteffort-podc713cd34_08f7_480c_b91d_bedb3b68bb36.slice. Dec 12 18:13:50.414604 systemd[1]: Created slice kubepods-besteffort-pod415d4ab0_257f_4751_838c_4b86e1cd5e79.slice - libcontainer container kubepods-besteffort-pod415d4ab0_257f_4751_838c_4b86e1cd5e79.slice. Dec 12 18:13:50.421602 systemd[1]: Created slice kubepods-besteffort-pod38a7f512_410f_47be_bd45_a402f5067f03.slice - libcontainer container kubepods-besteffort-pod38a7f512_410f_47be_bd45_a402f5067f03.slice. 
Dec 12 18:13:50.439514 kubelet[2802]: I1212 18:13:50.439330 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c713cd34-08f7-480c-b91d-bedb3b68bb36-config\") pod \"goldmane-666569f655-86pl8\" (UID: \"c713cd34-08f7-480c-b91d-bedb3b68bb36\") " pod="calico-system/goldmane-666569f655-86pl8" Dec 12 18:13:50.439958 kubelet[2802]: I1212 18:13:50.439672 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrlh\" (UniqueName: \"kubernetes.io/projected/0e30af07-2513-48c2-a9a7-015103c4abdc-kube-api-access-pdrlh\") pod \"whisker-67d57dbc99-qt96x\" (UID: \"0e30af07-2513-48c2-a9a7-015103c4abdc\") " pod="calico-system/whisker-67d57dbc99-qt96x" Dec 12 18:13:50.439958 kubelet[2802]: I1212 18:13:50.439695 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csz9j\" (UniqueName: \"kubernetes.io/projected/c713cd34-08f7-480c-b91d-bedb3b68bb36-kube-api-access-csz9j\") pod \"goldmane-666569f655-86pl8\" (UID: \"c713cd34-08f7-480c-b91d-bedb3b68bb36\") " pod="calico-system/goldmane-666569f655-86pl8" Dec 12 18:13:50.439958 kubelet[2802]: I1212 18:13:50.439710 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxlxx\" (UniqueName: \"kubernetes.io/projected/b35a9cda-d256-490b-8223-d4936abd6ff5-kube-api-access-fxlxx\") pod \"calico-apiserver-6cbcf5f67f-dvvv5\" (UID: \"b35a9cda-d256-490b-8223-d4936abd6ff5\") " pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" Dec 12 18:13:50.439958 kubelet[2802]: I1212 18:13:50.439726 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a7f512-410f-47be-bd45-a402f5067f03-tigera-ca-bundle\") pod \"calico-kube-controllers-6ccfb466b6-9s5wz\" (UID: 
\"38a7f512-410f-47be-bd45-a402f5067f03\") " pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" Dec 12 18:13:50.439958 kubelet[2802]: I1212 18:13:50.439741 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/415d4ab0-257f-4751-838c-4b86e1cd5e79-calico-apiserver-certs\") pod \"calico-apiserver-6cbcf5f67f-bmmdp\" (UID: \"415d4ab0-257f-4751-838c-4b86e1cd5e79\") " pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" Dec 12 18:13:50.440141 kubelet[2802]: I1212 18:13:50.439755 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e30af07-2513-48c2-a9a7-015103c4abdc-whisker-backend-key-pair\") pod \"whisker-67d57dbc99-qt96x\" (UID: \"0e30af07-2513-48c2-a9a7-015103c4abdc\") " pod="calico-system/whisker-67d57dbc99-qt96x" Dec 12 18:13:50.440141 kubelet[2802]: I1212 18:13:50.439772 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98r9t\" (UniqueName: \"kubernetes.io/projected/00c15d96-e0b6-4ce0-a16d-0db018401241-kube-api-access-98r9t\") pod \"coredns-668d6bf9bc-zljqj\" (UID: \"00c15d96-e0b6-4ce0-a16d-0db018401241\") " pod="kube-system/coredns-668d6bf9bc-zljqj" Dec 12 18:13:50.440141 kubelet[2802]: I1212 18:13:50.439785 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq54f\" (UniqueName: \"kubernetes.io/projected/7d6b7ada-d360-4468-8cf8-f61bde72489e-kube-api-access-nq54f\") pod \"coredns-668d6bf9bc-86vzh\" (UID: \"7d6b7ada-d360-4468-8cf8-f61bde72489e\") " pod="kube-system/coredns-668d6bf9bc-86vzh" Dec 12 18:13:50.440141 kubelet[2802]: I1212 18:13:50.439801 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9kts\" (UniqueName: 
\"kubernetes.io/projected/38a7f512-410f-47be-bd45-a402f5067f03-kube-api-access-x9kts\") pod \"calico-kube-controllers-6ccfb466b6-9s5wz\" (UID: \"38a7f512-410f-47be-bd45-a402f5067f03\") " pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" Dec 12 18:13:50.440141 kubelet[2802]: I1212 18:13:50.439815 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh5cv\" (UniqueName: \"kubernetes.io/projected/415d4ab0-257f-4751-838c-4b86e1cd5e79-kube-api-access-jh5cv\") pod \"calico-apiserver-6cbcf5f67f-bmmdp\" (UID: \"415d4ab0-257f-4751-838c-4b86e1cd5e79\") " pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" Dec 12 18:13:50.440261 kubelet[2802]: I1212 18:13:50.439829 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b35a9cda-d256-490b-8223-d4936abd6ff5-calico-apiserver-certs\") pod \"calico-apiserver-6cbcf5f67f-dvvv5\" (UID: \"b35a9cda-d256-490b-8223-d4936abd6ff5\") " pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" Dec 12 18:13:50.440261 kubelet[2802]: I1212 18:13:50.439843 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c713cd34-08f7-480c-b91d-bedb3b68bb36-goldmane-key-pair\") pod \"goldmane-666569f655-86pl8\" (UID: \"c713cd34-08f7-480c-b91d-bedb3b68bb36\") " pod="calico-system/goldmane-666569f655-86pl8" Dec 12 18:13:50.440261 kubelet[2802]: I1212 18:13:50.439859 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e30af07-2513-48c2-a9a7-015103c4abdc-whisker-ca-bundle\") pod \"whisker-67d57dbc99-qt96x\" (UID: \"0e30af07-2513-48c2-a9a7-015103c4abdc\") " pod="calico-system/whisker-67d57dbc99-qt96x" Dec 12 18:13:50.440261 kubelet[2802]: I1212 18:13:50.439872 2802 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00c15d96-e0b6-4ce0-a16d-0db018401241-config-volume\") pod \"coredns-668d6bf9bc-zljqj\" (UID: \"00c15d96-e0b6-4ce0-a16d-0db018401241\") " pod="kube-system/coredns-668d6bf9bc-zljqj" Dec 12 18:13:50.440261 kubelet[2802]: I1212 18:13:50.439886 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c713cd34-08f7-480c-b91d-bedb3b68bb36-goldmane-ca-bundle\") pod \"goldmane-666569f655-86pl8\" (UID: \"c713cd34-08f7-480c-b91d-bedb3b68bb36\") " pod="calico-system/goldmane-666569f655-86pl8" Dec 12 18:13:50.440496 kubelet[2802]: I1212 18:13:50.439913 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d6b7ada-d360-4468-8cf8-f61bde72489e-config-volume\") pod \"coredns-668d6bf9bc-86vzh\" (UID: \"7d6b7ada-d360-4468-8cf8-f61bde72489e\") " pod="kube-system/coredns-668d6bf9bc-86vzh" Dec 12 18:13:50.616332 kubelet[2802]: E1212 18:13:50.616275 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:50.618071 containerd[1629]: time="2025-12-12T18:13:50.618045623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:13:50.673071 kubelet[2802]: E1212 18:13:50.673033 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:50.674563 containerd[1629]: time="2025-12-12T18:13:50.674513183Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-zljqj,Uid:00c15d96-e0b6-4ce0-a16d-0db018401241,Namespace:kube-system,Attempt:0,}" Dec 12 18:13:50.686395 containerd[1629]: time="2025-12-12T18:13:50.686354843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d57dbc99-qt96x,Uid:0e30af07-2513-48c2-a9a7-015103c4abdc,Namespace:calico-system,Attempt:0,}" Dec 12 18:13:50.695374 containerd[1629]: time="2025-12-12T18:13:50.695206623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbcf5f67f-dvvv5,Uid:b35a9cda-d256-490b-8223-d4936abd6ff5,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:13:50.705890 kubelet[2802]: E1212 18:13:50.705850 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:50.707566 containerd[1629]: time="2025-12-12T18:13:50.707277763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86vzh,Uid:7d6b7ada-d360-4468-8cf8-f61bde72489e,Namespace:kube-system,Attempt:0,}" Dec 12 18:13:50.719657 containerd[1629]: time="2025-12-12T18:13:50.719613463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbcf5f67f-bmmdp,Uid:415d4ab0-257f-4751-838c-4b86e1cd5e79,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:13:50.724214 containerd[1629]: time="2025-12-12T18:13:50.724191373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccfb466b6-9s5wz,Uid:38a7f512-410f-47be-bd45-a402f5067f03,Namespace:calico-system,Attempt:0,}" Dec 12 18:13:50.805261 containerd[1629]: time="2025-12-12T18:13:50.805218913Z" level=error msg="Failed to destroy network for sandbox \"c083f87a6e103438f5257cfdd17a6183a5490f7926716dc281fd298393b43867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 12 18:13:50.810364 containerd[1629]: time="2025-12-12T18:13:50.810129063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d57dbc99-qt96x,Uid:0e30af07-2513-48c2-a9a7-015103c4abdc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c083f87a6e103438f5257cfdd17a6183a5490f7926716dc281fd298393b43867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.810515 kubelet[2802]: E1212 18:13:50.810407 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c083f87a6e103438f5257cfdd17a6183a5490f7926716dc281fd298393b43867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.810515 kubelet[2802]: E1212 18:13:50.810476 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c083f87a6e103438f5257cfdd17a6183a5490f7926716dc281fd298393b43867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67d57dbc99-qt96x" Dec 12 18:13:50.810515 kubelet[2802]: E1212 18:13:50.810499 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c083f87a6e103438f5257cfdd17a6183a5490f7926716dc281fd298393b43867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-67d57dbc99-qt96x" Dec 12 18:13:50.810613 kubelet[2802]: E1212 18:13:50.810540 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67d57dbc99-qt96x_calico-system(0e30af07-2513-48c2-a9a7-015103c4abdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67d57dbc99-qt96x_calico-system(0e30af07-2513-48c2-a9a7-015103c4abdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c083f87a6e103438f5257cfdd17a6183a5490f7926716dc281fd298393b43867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67d57dbc99-qt96x" podUID="0e30af07-2513-48c2-a9a7-015103c4abdc" Dec 12 18:13:50.830408 containerd[1629]: time="2025-12-12T18:13:50.830287553Z" level=error msg="Failed to destroy network for sandbox \"721fdc1192580570d2d2119390afd9f0186dd8c648d5f08d0e07d8c3ac12ffcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.831922 containerd[1629]: time="2025-12-12T18:13:50.831775953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zljqj,Uid:00c15d96-e0b6-4ce0-a16d-0db018401241,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"721fdc1192580570d2d2119390afd9f0186dd8c648d5f08d0e07d8c3ac12ffcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.832343 kubelet[2802]: E1212 18:13:50.832216 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"721fdc1192580570d2d2119390afd9f0186dd8c648d5f08d0e07d8c3ac12ffcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.832343 kubelet[2802]: E1212 18:13:50.832282 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"721fdc1192580570d2d2119390afd9f0186dd8c648d5f08d0e07d8c3ac12ffcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zljqj" Dec 12 18:13:50.832343 kubelet[2802]: E1212 18:13:50.832326 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"721fdc1192580570d2d2119390afd9f0186dd8c648d5f08d0e07d8c3ac12ffcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zljqj" Dec 12 18:13:50.832461 kubelet[2802]: E1212 18:13:50.832363 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zljqj_kube-system(00c15d96-e0b6-4ce0-a16d-0db018401241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zljqj_kube-system(00c15d96-e0b6-4ce0-a16d-0db018401241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"721fdc1192580570d2d2119390afd9f0186dd8c648d5f08d0e07d8c3ac12ffcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zljqj" 
podUID="00c15d96-e0b6-4ce0-a16d-0db018401241" Dec 12 18:13:50.849217 containerd[1629]: time="2025-12-12T18:13:50.849165553Z" level=error msg="Failed to destroy network for sandbox \"a0dcf30ec7b5340715a9f0fca8fe72f9ba9439d7fe9bf35a391ed5714b384997\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.850947 containerd[1629]: time="2025-12-12T18:13:50.850922613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbcf5f67f-bmmdp,Uid:415d4ab0-257f-4751-838c-4b86e1cd5e79,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0dcf30ec7b5340715a9f0fca8fe72f9ba9439d7fe9bf35a391ed5714b384997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.851164 kubelet[2802]: E1212 18:13:50.851103 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0dcf30ec7b5340715a9f0fca8fe72f9ba9439d7fe9bf35a391ed5714b384997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.851208 kubelet[2802]: E1212 18:13:50.851169 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0dcf30ec7b5340715a9f0fca8fe72f9ba9439d7fe9bf35a391ed5714b384997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" Dec 12 18:13:50.851208 
kubelet[2802]: E1212 18:13:50.851187 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0dcf30ec7b5340715a9f0fca8fe72f9ba9439d7fe9bf35a391ed5714b384997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" Dec 12 18:13:50.851257 kubelet[2802]: E1212 18:13:50.851226 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbcf5f67f-bmmdp_calico-apiserver(415d4ab0-257f-4751-838c-4b86e1cd5e79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbcf5f67f-bmmdp_calico-apiserver(415d4ab0-257f-4751-838c-4b86e1cd5e79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0dcf30ec7b5340715a9f0fca8fe72f9ba9439d7fe9bf35a391ed5714b384997\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:13:50.859176 containerd[1629]: time="2025-12-12T18:13:50.859053383Z" level=error msg="Failed to destroy network for sandbox \"118f9496a2cb4016687d3049f48b6cbee26d8fe00b0665de9bad998839691741\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.860565 containerd[1629]: time="2025-12-12T18:13:50.860531013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbcf5f67f-dvvv5,Uid:b35a9cda-d256-490b-8223-d4936abd6ff5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"118f9496a2cb4016687d3049f48b6cbee26d8fe00b0665de9bad998839691741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.860926 kubelet[2802]: E1212 18:13:50.860896 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"118f9496a2cb4016687d3049f48b6cbee26d8fe00b0665de9bad998839691741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.860974 kubelet[2802]: E1212 18:13:50.860936 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"118f9496a2cb4016687d3049f48b6cbee26d8fe00b0665de9bad998839691741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" Dec 12 18:13:50.860974 kubelet[2802]: E1212 18:13:50.860951 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"118f9496a2cb4016687d3049f48b6cbee26d8fe00b0665de9bad998839691741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" Dec 12 18:13:50.861027 kubelet[2802]: E1212 18:13:50.860994 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6cbcf5f67f-dvvv5_calico-apiserver(b35a9cda-d256-490b-8223-d4936abd6ff5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbcf5f67f-dvvv5_calico-apiserver(b35a9cda-d256-490b-8223-d4936abd6ff5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"118f9496a2cb4016687d3049f48b6cbee26d8fe00b0665de9bad998839691741\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:13:50.875315 containerd[1629]: time="2025-12-12T18:13:50.875215513Z" level=error msg="Failed to destroy network for sandbox \"c101554227c22be034e3c561407f4915a95e64bb9259df7eb8713bc7a1fae0b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.876624 containerd[1629]: time="2025-12-12T18:13:50.876551043Z" level=error msg="Failed to destroy network for sandbox \"24a763f56fcbe3d3ae8190c547dfa6e108a5cc3c0bdc4008c906fa332f728d0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.878048 containerd[1629]: time="2025-12-12T18:13:50.877823863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccfb466b6-9s5wz,Uid:38a7f512-410f-47be-bd45-a402f5067f03,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c101554227c22be034e3c561407f4915a95e64bb9259df7eb8713bc7a1fae0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.878228 kubelet[2802]: E1212 18:13:50.878187 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c101554227c22be034e3c561407f4915a95e64bb9259df7eb8713bc7a1fae0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.878273 kubelet[2802]: E1212 18:13:50.878248 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c101554227c22be034e3c561407f4915a95e64bb9259df7eb8713bc7a1fae0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" Dec 12 18:13:50.878273 kubelet[2802]: E1212 18:13:50.878265 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c101554227c22be034e3c561407f4915a95e64bb9259df7eb8713bc7a1fae0b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" Dec 12 18:13:50.878799 kubelet[2802]: E1212 18:13:50.878501 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6ccfb466b6-9s5wz_calico-system(38a7f512-410f-47be-bd45-a402f5067f03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6ccfb466b6-9s5wz_calico-system(38a7f512-410f-47be-bd45-a402f5067f03)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"c101554227c22be034e3c561407f4915a95e64bb9259df7eb8713bc7a1fae0b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:13:50.879188 containerd[1629]: time="2025-12-12T18:13:50.879142453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86vzh,Uid:7d6b7ada-d360-4468-8cf8-f61bde72489e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24a763f56fcbe3d3ae8190c547dfa6e108a5cc3c0bdc4008c906fa332f728d0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.883897 kubelet[2802]: E1212 18:13:50.883869 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24a763f56fcbe3d3ae8190c547dfa6e108a5cc3c0bdc4008c906fa332f728d0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:50.883963 kubelet[2802]: E1212 18:13:50.883904 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24a763f56fcbe3d3ae8190c547dfa6e108a5cc3c0bdc4008c906fa332f728d0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-86vzh" Dec 12 18:13:50.883963 kubelet[2802]: E1212 18:13:50.883920 2802 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24a763f56fcbe3d3ae8190c547dfa6e108a5cc3c0bdc4008c906fa332f728d0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-86vzh" Dec 12 18:13:50.884975 kubelet[2802]: E1212 18:13:50.884656 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-86vzh_kube-system(7d6b7ada-d360-4468-8cf8-f61bde72489e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-86vzh_kube-system(7d6b7ada-d360-4468-8cf8-f61bde72489e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24a763f56fcbe3d3ae8190c547dfa6e108a5cc3c0bdc4008c906fa332f728d0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-86vzh" podUID="7d6b7ada-d360-4468-8cf8-f61bde72489e" Dec 12 18:13:51.488803 systemd[1]: Created slice kubepods-besteffort-podf46da395_0309_47b8_bfd7_ce69c3c79781.slice - libcontainer container kubepods-besteffort-podf46da395_0309_47b8_bfd7_ce69c3c79781.slice. 
Dec 12 18:13:51.492359 containerd[1629]: time="2025-12-12T18:13:51.491955973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8ggwr,Uid:f46da395-0309-47b8-bfd7-ce69c3c79781,Namespace:calico-system,Attempt:0,}" Dec 12 18:13:51.542141 kubelet[2802]: E1212 18:13:51.542106 2802 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Dec 12 18:13:51.545191 kubelet[2802]: E1212 18:13:51.543274 2802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c713cd34-08f7-480c-b91d-bedb3b68bb36-goldmane-key-pair podName:c713cd34-08f7-480c-b91d-bedb3b68bb36 nodeName:}" failed. No retries permitted until 2025-12-12 18:13:52.043230043 +0000 UTC m=+26.680248351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/c713cd34-08f7-480c-b91d-bedb3b68bb36-goldmane-key-pair") pod "goldmane-666569f655-86pl8" (UID: "c713cd34-08f7-480c-b91d-bedb3b68bb36") : failed to sync secret cache: timed out waiting for the condition Dec 12 18:13:51.548375 kubelet[2802]: E1212 18:13:51.548359 2802 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:13:51.548596 kubelet[2802]: E1212 18:13:51.548570 2802 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c713cd34-08f7-480c-b91d-bedb3b68bb36-goldmane-ca-bundle podName:c713cd34-08f7-480c-b91d-bedb3b68bb36 nodeName:}" failed. No retries permitted until 2025-12-12 18:13:52.048539493 +0000 UTC m=+26.685557801 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/c713cd34-08f7-480c-b91d-bedb3b68bb36-goldmane-ca-bundle") pod "goldmane-666569f655-86pl8" (UID: "c713cd34-08f7-480c-b91d-bedb3b68bb36") : failed to sync configmap cache: timed out waiting for the condition Dec 12 18:13:51.581422 containerd[1629]: time="2025-12-12T18:13:51.581344153Z" level=error msg="Failed to destroy network for sandbox \"4d41c928f3dce9fa13fa0124b27f226b7b7f3ea71de30dadc5e09df3ec8eaff5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:51.585497 systemd[1]: run-netns-cni\x2d4bcbac5a\x2d71a3\x2dc470\x2d5ebf\x2d267cb372c517.mount: Deactivated successfully. Dec 12 18:13:51.586041 containerd[1629]: time="2025-12-12T18:13:51.585687023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8ggwr,Uid:f46da395-0309-47b8-bfd7-ce69c3c79781,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d41c928f3dce9fa13fa0124b27f226b7b7f3ea71de30dadc5e09df3ec8eaff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:51.586121 kubelet[2802]: E1212 18:13:51.586094 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d41c928f3dce9fa13fa0124b27f226b7b7f3ea71de30dadc5e09df3ec8eaff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:51.586161 kubelet[2802]: E1212 18:13:51.586146 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"4d41c928f3dce9fa13fa0124b27f226b7b7f3ea71de30dadc5e09df3ec8eaff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8ggwr" Dec 12 18:13:51.586189 kubelet[2802]: E1212 18:13:51.586165 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d41c928f3dce9fa13fa0124b27f226b7b7f3ea71de30dadc5e09df3ec8eaff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8ggwr" Dec 12 18:13:51.586218 kubelet[2802]: E1212 18:13:51.586198 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8ggwr_calico-system(f46da395-0309-47b8-bfd7-ce69c3c79781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8ggwr_calico-system(f46da395-0309-47b8-bfd7-ce69c3c79781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d41c928f3dce9fa13fa0124b27f226b7b7f3ea71de30dadc5e09df3ec8eaff5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:13:52.212267 containerd[1629]: time="2025-12-12T18:13:52.212193203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-86pl8,Uid:c713cd34-08f7-480c-b91d-bedb3b68bb36,Namespace:calico-system,Attempt:0,}" Dec 12 18:13:52.288077 containerd[1629]: time="2025-12-12T18:13:52.288026123Z" level=error msg="Failed to destroy network for 
sandbox \"53f5c5358fe2f115dee4d820c31170f59b0da280fcb4ed0a7cc76aa65c0a0466\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:52.291142 containerd[1629]: time="2025-12-12T18:13:52.291111773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-86pl8,Uid:c713cd34-08f7-480c-b91d-bedb3b68bb36,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53f5c5358fe2f115dee4d820c31170f59b0da280fcb4ed0a7cc76aa65c0a0466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:52.291369 kubelet[2802]: E1212 18:13:52.291288 2802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53f5c5358fe2f115dee4d820c31170f59b0da280fcb4ed0a7cc76aa65c0a0466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:13:52.291673 kubelet[2802]: E1212 18:13:52.291381 2802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53f5c5358fe2f115dee4d820c31170f59b0da280fcb4ed0a7cc76aa65c0a0466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-86pl8" Dec 12 18:13:52.291673 kubelet[2802]: E1212 18:13:52.291405 2802 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"53f5c5358fe2f115dee4d820c31170f59b0da280fcb4ed0a7cc76aa65c0a0466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-86pl8" Dec 12 18:13:52.291673 kubelet[2802]: E1212 18:13:52.291440 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-86pl8_calico-system(c713cd34-08f7-480c-b91d-bedb3b68bb36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-86pl8_calico-system(c713cd34-08f7-480c-b91d-bedb3b68bb36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53f5c5358fe2f115dee4d820c31170f59b0da280fcb4ed0a7cc76aa65c0a0466\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:13:52.560124 systemd[1]: run-netns-cni\x2d485e7e47\x2d3ebb\x2d0d71\x2dccb3\x2d06c73d4d270f.mount: Deactivated successfully. Dec 12 18:13:54.292089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192242466.mount: Deactivated successfully. 
Dec 12 18:13:54.317045 containerd[1629]: time="2025-12-12T18:13:54.316990893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:54.317889 containerd[1629]: time="2025-12-12T18:13:54.317789303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 12 18:13:54.318449 containerd[1629]: time="2025-12-12T18:13:54.318417763Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:54.320420 containerd[1629]: time="2025-12-12T18:13:54.319893513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:13:54.320420 containerd[1629]: time="2025-12-12T18:13:54.320312373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.70221658s" Dec 12 18:13:54.320420 containerd[1629]: time="2025-12-12T18:13:54.320344913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:13:54.337846 containerd[1629]: time="2025-12-12T18:13:54.337824013Z" level=info msg="CreateContainer within sandbox \"7d4859e1e39943190953ec64f6d8cb484644f48056c2fe4aa50cba8750b2d02a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:13:54.347569 containerd[1629]: time="2025-12-12T18:13:54.347547353Z" level=info msg="Container 
bbec7ee327379a51c4911a8d83ece5db68caec941a1cf90f331aad2bf40c21de: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:13:54.362386 containerd[1629]: time="2025-12-12T18:13:54.362353973Z" level=info msg="CreateContainer within sandbox \"7d4859e1e39943190953ec64f6d8cb484644f48056c2fe4aa50cba8750b2d02a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bbec7ee327379a51c4911a8d83ece5db68caec941a1cf90f331aad2bf40c21de\"" Dec 12 18:13:54.362772 containerd[1629]: time="2025-12-12T18:13:54.362751173Z" level=info msg="StartContainer for \"bbec7ee327379a51c4911a8d83ece5db68caec941a1cf90f331aad2bf40c21de\"" Dec 12 18:13:54.363904 containerd[1629]: time="2025-12-12T18:13:54.363884863Z" level=info msg="connecting to shim bbec7ee327379a51c4911a8d83ece5db68caec941a1cf90f331aad2bf40c21de" address="unix:///run/containerd/s/b9a9d609c2d50ba11a9523b4078e8beba83dbe773372ac3aa83684f61f45df29" protocol=ttrpc version=3 Dec 12 18:13:54.416589 systemd[1]: Started cri-containerd-bbec7ee327379a51c4911a8d83ece5db68caec941a1cf90f331aad2bf40c21de.scope - libcontainer container bbec7ee327379a51c4911a8d83ece5db68caec941a1cf90f331aad2bf40c21de. 
Dec 12 18:13:54.487000 audit: BPF prog-id=184 op=LOAD Dec 12 18:13:54.487000 audit[3771]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3327 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:54.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262656337656533323733373961353163343931316138643833656365 Dec 12 18:13:54.487000 audit: BPF prog-id=185 op=LOAD Dec 12 18:13:54.487000 audit[3771]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3327 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:54.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262656337656533323733373961353163343931316138643833656365 Dec 12 18:13:54.487000 audit: BPF prog-id=185 op=UNLOAD Dec 12 18:13:54.487000 audit[3771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3327 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:54.487000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262656337656533323733373961353163343931316138643833656365 Dec 12 18:13:54.487000 audit: BPF prog-id=184 op=UNLOAD Dec 12 18:13:54.487000 audit[3771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3327 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:54.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262656337656533323733373961353163343931316138643833656365 Dec 12 18:13:54.487000 audit: BPF prog-id=186 op=LOAD Dec 12 18:13:54.487000 audit[3771]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3327 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:54.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262656337656533323733373961353163343931316138643833656365 Dec 12 18:13:54.510336 containerd[1629]: time="2025-12-12T18:13:54.510261663Z" level=info msg="StartContainer for \"bbec7ee327379a51c4911a8d83ece5db68caec941a1cf90f331aad2bf40c21de\" returns successfully" Dec 12 18:13:54.599079 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Dec 12 18:13:54.599213 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 18:13:54.638109 kubelet[2802]: E1212 18:13:54.638042 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:54.651766 kubelet[2802]: I1212 18:13:54.651654 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xbhkd" podStartSLOduration=1.667038933 podStartE2EDuration="11.651624023s" podCreationTimestamp="2025-12-12 18:13:43 +0000 UTC" firstStartedPulling="2025-12-12 18:13:44.336734153 +0000 UTC m=+18.973752461" lastFinishedPulling="2025-12-12 18:13:54.321319243 +0000 UTC m=+28.958337551" observedRunningTime="2025-12-12 18:13:54.649466893 +0000 UTC m=+29.286485211" watchObservedRunningTime="2025-12-12 18:13:54.651624023 +0000 UTC m=+29.288642331" Dec 12 18:13:54.874096 kubelet[2802]: I1212 18:13:54.874060 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e30af07-2513-48c2-a9a7-015103c4abdc-whisker-backend-key-pair\") pod \"0e30af07-2513-48c2-a9a7-015103c4abdc\" (UID: \"0e30af07-2513-48c2-a9a7-015103c4abdc\") " Dec 12 18:13:54.874096 kubelet[2802]: I1212 18:13:54.874100 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdrlh\" (UniqueName: \"kubernetes.io/projected/0e30af07-2513-48c2-a9a7-015103c4abdc-kube-api-access-pdrlh\") pod \"0e30af07-2513-48c2-a9a7-015103c4abdc\" (UID: \"0e30af07-2513-48c2-a9a7-015103c4abdc\") " Dec 12 18:13:54.874321 kubelet[2802]: I1212 18:13:54.874123 2802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e30af07-2513-48c2-a9a7-015103c4abdc-whisker-ca-bundle\") pod 
\"0e30af07-2513-48c2-a9a7-015103c4abdc\" (UID: \"0e30af07-2513-48c2-a9a7-015103c4abdc\") " Dec 12 18:13:54.874594 kubelet[2802]: I1212 18:13:54.874568 2802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e30af07-2513-48c2-a9a7-015103c4abdc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0e30af07-2513-48c2-a9a7-015103c4abdc" (UID: "0e30af07-2513-48c2-a9a7-015103c4abdc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:13:54.879911 kubelet[2802]: I1212 18:13:54.879889 2802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e30af07-2513-48c2-a9a7-015103c4abdc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0e30af07-2513-48c2-a9a7-015103c4abdc" (UID: "0e30af07-2513-48c2-a9a7-015103c4abdc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:13:54.880011 kubelet[2802]: I1212 18:13:54.879901 2802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e30af07-2513-48c2-a9a7-015103c4abdc-kube-api-access-pdrlh" (OuterVolumeSpecName: "kube-api-access-pdrlh") pod "0e30af07-2513-48c2-a9a7-015103c4abdc" (UID: "0e30af07-2513-48c2-a9a7-015103c4abdc"). InnerVolumeSpecName "kube-api-access-pdrlh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:13:54.974471 kubelet[2802]: I1212 18:13:54.974388 2802 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e30af07-2513-48c2-a9a7-015103c4abdc-whisker-backend-key-pair\") on node \"172-234-28-21\" DevicePath \"\"" Dec 12 18:13:54.974471 kubelet[2802]: I1212 18:13:54.974410 2802 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pdrlh\" (UniqueName: \"kubernetes.io/projected/0e30af07-2513-48c2-a9a7-015103c4abdc-kube-api-access-pdrlh\") on node \"172-234-28-21\" DevicePath \"\"" Dec 12 18:13:54.974471 kubelet[2802]: I1212 18:13:54.974439 2802 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e30af07-2513-48c2-a9a7-015103c4abdc-whisker-ca-bundle\") on node \"172-234-28-21\" DevicePath \"\"" Dec 12 18:13:55.295495 systemd[1]: var-lib-kubelet-pods-0e30af07\x2d2513\x2d48c2\x2da9a7\x2d015103c4abdc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpdrlh.mount: Deactivated successfully. Dec 12 18:13:55.295601 systemd[1]: var-lib-kubelet-pods-0e30af07\x2d2513\x2d48c2\x2da9a7\x2d015103c4abdc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:13:55.486860 systemd[1]: Removed slice kubepods-besteffort-pod0e30af07_2513_48c2_a9a7_015103c4abdc.slice - libcontainer container kubepods-besteffort-pod0e30af07_2513_48c2_a9a7_015103c4abdc.slice. 
Dec 12 18:13:55.638867 kubelet[2802]: I1212 18:13:55.638141 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:13:55.638867 kubelet[2802]: E1212 18:13:55.638576 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:13:55.710360 systemd[1]: Created slice kubepods-besteffort-podf5d7327f_3d2b_4ade_8746_8210d015da61.slice - libcontainer container kubepods-besteffort-podf5d7327f_3d2b_4ade_8746_8210d015da61.slice. Dec 12 18:13:55.780157 kubelet[2802]: I1212 18:13:55.779640 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5d7327f-3d2b-4ade-8746-8210d015da61-whisker-ca-bundle\") pod \"whisker-6d7c5d55f9-7mw7q\" (UID: \"f5d7327f-3d2b-4ade-8746-8210d015da61\") " pod="calico-system/whisker-6d7c5d55f9-7mw7q" Dec 12 18:13:55.780157 kubelet[2802]: I1212 18:13:55.779726 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5d7327f-3d2b-4ade-8746-8210d015da61-whisker-backend-key-pair\") pod \"whisker-6d7c5d55f9-7mw7q\" (UID: \"f5d7327f-3d2b-4ade-8746-8210d015da61\") " pod="calico-system/whisker-6d7c5d55f9-7mw7q" Dec 12 18:13:55.780157 kubelet[2802]: I1212 18:13:55.779746 2802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdd5z\" (UniqueName: \"kubernetes.io/projected/f5d7327f-3d2b-4ade-8746-8210d015da61-kube-api-access-rdd5z\") pod \"whisker-6d7c5d55f9-7mw7q\" (UID: \"f5d7327f-3d2b-4ade-8746-8210d015da61\") " pod="calico-system/whisker-6d7c5d55f9-7mw7q" Dec 12 18:13:56.017190 containerd[1629]: time="2025-12-12T18:13:56.017088563Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6d7c5d55f9-7mw7q,Uid:f5d7327f-3d2b-4ade-8746-8210d015da61,Namespace:calico-system,Attempt:0,}" Dec 12 18:13:56.216772 systemd-networkd[1530]: cali463f004b2ec: Link UP Dec 12 18:13:56.220721 systemd-networkd[1530]: cali463f004b2ec: Gained carrier Dec 12 18:13:56.239693 containerd[1629]: 2025-12-12 18:13:56.059 [INFO][3924] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:13:56.239693 containerd[1629]: 2025-12-12 18:13:56.102 [INFO][3924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0 whisker-6d7c5d55f9- calico-system f5d7327f-3d2b-4ade-8746-8210d015da61 876 0 2025-12-12 18:13:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d7c5d55f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-28-21 whisker-6d7c5d55f9-7mw7q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali463f004b2ec [] [] }} ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" Namespace="calico-system" Pod="whisker-6d7c5d55f9-7mw7q" WorkloadEndpoint="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-" Dec 12 18:13:56.239693 containerd[1629]: 2025-12-12 18:13:56.102 [INFO][3924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" Namespace="calico-system" Pod="whisker-6d7c5d55f9-7mw7q" WorkloadEndpoint="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0" Dec 12 18:13:56.239693 containerd[1629]: 2025-12-12 18:13:56.157 [INFO][3939] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" HandleID="k8s-pod-network.0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" 
Workload="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0" Dec 12 18:13:56.240155 containerd[1629]: 2025-12-12 18:13:56.157 [INFO][3939] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" HandleID="k8s-pod-network.0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" Workload="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d810), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-28-21", "pod":"whisker-6d7c5d55f9-7mw7q", "timestamp":"2025-12-12 18:13:56.157214243 +0000 UTC"}, Hostname:"172-234-28-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:13:56.240155 containerd[1629]: 2025-12-12 18:13:56.157 [INFO][3939] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:13:56.240155 containerd[1629]: 2025-12-12 18:13:56.157 [INFO][3939] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:13:56.240155 containerd[1629]: 2025-12-12 18:13:56.157 [INFO][3939] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-28-21' Dec 12 18:13:56.240155 containerd[1629]: 2025-12-12 18:13:56.166 [INFO][3939] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" host="172-234-28-21" Dec 12 18:13:56.240155 containerd[1629]: 2025-12-12 18:13:56.170 [INFO][3939] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-28-21" Dec 12 18:13:56.240155 containerd[1629]: 2025-12-12 18:13:56.174 [INFO][3939] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="172-234-28-21" Dec 12 18:13:56.240155 containerd[1629]: 2025-12-12 18:13:56.176 [INFO][3939] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:13:56.240155 containerd[1629]: 2025-12-12 18:13:56.178 [INFO][3939] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:13:56.240155 containerd[1629]: 2025-12-12 18:13:56.178 [INFO][3939] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" host="172-234-28-21" Dec 12 18:13:56.240605 containerd[1629]: 2025-12-12 18:13:56.181 [INFO][3939] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358 Dec 12 18:13:56.240605 containerd[1629]: 2025-12-12 18:13:56.185 [INFO][3939] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" host="172-234-28-21" Dec 12 18:13:56.240605 containerd[1629]: 2025-12-12 18:13:56.191 [INFO][3939] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.1/26] block=192.168.42.0/26 
handle="k8s-pod-network.0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" host="172-234-28-21" Dec 12 18:13:56.240605 containerd[1629]: 2025-12-12 18:13:56.192 [INFO][3939] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.1/26] handle="k8s-pod-network.0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" host="172-234-28-21" Dec 12 18:13:56.240605 containerd[1629]: 2025-12-12 18:13:56.192 [INFO][3939] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:13:56.240605 containerd[1629]: 2025-12-12 18:13:56.192 [INFO][3939] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.1/26] IPv6=[] ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" HandleID="k8s-pod-network.0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" Workload="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0" Dec 12 18:13:56.240721 containerd[1629]: 2025-12-12 18:13:56.198 [INFO][3924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" Namespace="calico-system" Pod="whisker-6d7c5d55f9-7mw7q" WorkloadEndpoint="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0", GenerateName:"whisker-6d7c5d55f9-", Namespace:"calico-system", SelfLink:"", UID:"f5d7327f-3d2b-4ade-8746-8210d015da61", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d7c5d55f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"", Pod:"whisker-6d7c5d55f9-7mw7q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali463f004b2ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:13:56.240721 containerd[1629]: 2025-12-12 18:13:56.198 [INFO][3924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.1/32] ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" Namespace="calico-system" Pod="whisker-6d7c5d55f9-7mw7q" WorkloadEndpoint="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0" Dec 12 18:13:56.240799 containerd[1629]: 2025-12-12 18:13:56.199 [INFO][3924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali463f004b2ec ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" Namespace="calico-system" Pod="whisker-6d7c5d55f9-7mw7q" WorkloadEndpoint="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0" Dec 12 18:13:56.240799 containerd[1629]: 2025-12-12 18:13:56.223 [INFO][3924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" Namespace="calico-system" Pod="whisker-6d7c5d55f9-7mw7q" WorkloadEndpoint="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0" Dec 12 18:13:56.240840 containerd[1629]: 2025-12-12 18:13:56.224 [INFO][3924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" Namespace="calico-system" 
Pod="whisker-6d7c5d55f9-7mw7q" WorkloadEndpoint="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0", GenerateName:"whisker-6d7c5d55f9-", Namespace:"calico-system", SelfLink:"", UID:"f5d7327f-3d2b-4ade-8746-8210d015da61", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d7c5d55f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358", Pod:"whisker-6d7c5d55f9-7mw7q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali463f004b2ec", MAC:"82:00:72:45:e0:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:13:56.240899 containerd[1629]: 2025-12-12 18:13:56.236 [INFO][3924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" Namespace="calico-system" Pod="whisker-6d7c5d55f9-7mw7q" WorkloadEndpoint="172--234--28--21-k8s-whisker--6d7c5d55f9--7mw7q-eth0" Dec 12 18:13:56.281493 containerd[1629]: 
time="2025-12-12T18:13:56.281270793Z" level=info msg="connecting to shim 0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358" address="unix:///run/containerd/s/88a3e44e598d23a41fdbe2abfaa9a81e164fc4c8a722f59894205178aaaada15" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:13:56.319484 systemd[1]: Started cri-containerd-0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358.scope - libcontainer container 0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358. Dec 12 18:13:56.336341 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 12 18:13:56.336436 kernel: audit: type=1334 audit(1765563236.332:592): prog-id=187 op=LOAD Dec 12 18:13:56.332000 audit: BPF prog-id=187 op=LOAD Dec 12 18:13:56.333000 audit: BPF prog-id=188 op=LOAD Dec 12 18:13:56.340543 kernel: audit: type=1334 audit(1765563236.333:593): prog-id=188 op=LOAD Dec 12 18:13:56.340599 kernel: audit: type=1300 audit(1765563236.333:593): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe238 a2=98 a3=0 items=0 ppid=3965 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:56.333000 audit[3976]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe238 a2=98 a3=0 items=0 ppid=3965 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:56.355448 kernel: audit: type=1327 audit(1765563236.333:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306461343362383038633536373562613435666263386561663037 Dec 12 18:13:56.333000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306461343362383038633536373562613435666263386561663037 Dec 12 18:13:56.357753 kernel: audit: type=1334 audit(1765563236.333:594): prog-id=188 op=UNLOAD Dec 12 18:13:56.333000 audit: BPF prog-id=188 op=UNLOAD Dec 12 18:13:56.366207 kernel: audit: type=1300 audit(1765563236.333:594): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3965 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:56.333000 audit[3976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3965 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306461343362383038633536373562613435666263386561663037 Dec 12 18:13:56.375651 kernel: audit: type=1327 audit(1765563236.333:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306461343362383038633536373562613435666263386561663037 Dec 12 18:13:56.375692 kernel: audit: type=1334 audit(1765563236.333:595): prog-id=189 op=LOAD Dec 12 18:13:56.333000 audit: BPF prog-id=189 op=LOAD Dec 12 18:13:56.383248 kernel: audit: type=1300 audit(1765563236.333:595): arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0001fe488 a2=98 a3=0 items=0 ppid=3965 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:56.333000 audit[3976]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe488 a2=98 a3=0 items=0 ppid=3965 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306461343362383038633536373562613435666263386561663037 Dec 12 18:13:56.333000 audit: BPF prog-id=190 op=LOAD Dec 12 18:13:56.333000 audit[3976]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fe218 a2=98 a3=0 items=0 ppid=3965 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306461343362383038633536373562613435666263386561663037 Dec 12 18:13:56.333000 audit: BPF prog-id=190 op=UNLOAD Dec 12 18:13:56.333000 audit[3976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3965 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:56.392367 kernel: audit: type=1327 audit(1765563236.333:595): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306461343362383038633536373562613435666263386561663037 Dec 12 18:13:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306461343362383038633536373562613435666263386561663037 Dec 12 18:13:56.333000 audit: BPF prog-id=189 op=UNLOAD Dec 12 18:13:56.333000 audit[3976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3965 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306461343362383038633536373562613435666263386561663037 Dec 12 18:13:56.333000 audit: BPF prog-id=191 op=LOAD Dec 12 18:13:56.333000 audit[3976]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fe6e8 a2=98 a3=0 items=0 ppid=3965 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:56.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306461343362383038633536373562613435666263386561663037 Dec 12 18:13:56.395702 containerd[1629]: 
time="2025-12-12T18:13:56.395612833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7c5d55f9-7mw7q,Uid:f5d7327f-3d2b-4ade-8746-8210d015da61,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f0da43b808c5675ba45fbc8eaf074bba22a98aabf3974c6d0467115bf35c358\"" Dec 12 18:13:56.398402 containerd[1629]: time="2025-12-12T18:13:56.398370583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:13:56.540819 containerd[1629]: time="2025-12-12T18:13:56.540754193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:13:56.541884 containerd[1629]: time="2025-12-12T18:13:56.541831873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:13:56.542654 containerd[1629]: time="2025-12-12T18:13:56.541855453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:13:56.542701 kubelet[2802]: E1212 18:13:56.542215 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:13:56.542701 kubelet[2802]: E1212 18:13:56.542347 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:13:56.551167 kubelet[2802]: E1212 18:13:56.551031 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3f4f98888c449589a809d6f0e403cbb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rdd5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d7c5d55f9-7mw7q_calico-system(f5d7327f-3d2b-4ade-8746-8210d015da61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:13:56.554401 containerd[1629]: time="2025-12-12T18:13:56.554376423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:13:56.681368 containerd[1629]: 
time="2025-12-12T18:13:56.681275593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:13:56.682696 containerd[1629]: time="2025-12-12T18:13:56.682540243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:13:56.682696 containerd[1629]: time="2025-12-12T18:13:56.682635463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:13:56.683823 kubelet[2802]: E1212 18:13:56.683144 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:13:56.683823 kubelet[2802]: E1212 18:13:56.683688 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:13:56.684387 kubelet[2802]: E1212 18:13:56.683956 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdd5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d7c5d55f9-7mw7q_calico-system(f5d7327f-3d2b-4ade-8746-8210d015da61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:13:56.686243 kubelet[2802]: E1212 18:13:56.686170 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:13:57.481320 kubelet[2802]: I1212 18:13:57.481244 2802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e30af07-2513-48c2-a9a7-015103c4abdc" path="/var/lib/kubelet/pods/0e30af07-2513-48c2-a9a7-015103c4abdc/volumes" Dec 12 18:13:57.647664 kubelet[2802]: E1212 18:13:57.647605 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:13:57.673000 audit[4025]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:57.673000 audit[4025]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffda2ba4740 a2=0 a3=7ffda2ba472c items=0 ppid=2909 pid=4025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:57.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:57.680000 audit[4025]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:13:57.680000 audit[4025]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda2ba4740 a2=0 a3=0 items=0 ppid=2909 pid=4025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:13:57.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:13:58.300496 systemd-networkd[1530]: cali463f004b2ec: Gained IPv6LL Dec 12 18:14:02.478792 kubelet[2802]: E1212 18:14:02.478631 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:02.480443 containerd[1629]: time="2025-12-12T18:14:02.479881423Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6cbcf5f67f-dvvv5,Uid:b35a9cda-d256-490b-8223-d4936abd6ff5,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:14:02.481222 containerd[1629]: time="2025-12-12T18:14:02.480015023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86vzh,Uid:7d6b7ada-d360-4468-8cf8-f61bde72489e,Namespace:kube-system,Attempt:0,}" Dec 12 18:14:02.606397 systemd-networkd[1530]: cali62fbf5bacd8: Link UP Dec 12 18:14:02.607014 systemd-networkd[1530]: cali62fbf5bacd8: Gained carrier Dec 12 18:14:02.615387 containerd[1629]: 2025-12-12 18:14:02.519 [INFO][4125] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:14:02.615387 containerd[1629]: 2025-12-12 18:14:02.531 [INFO][4125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0 calico-apiserver-6cbcf5f67f- calico-apiserver b35a9cda-d256-490b-8223-d4936abd6ff5 808 0 2025-12-12 18:13:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cbcf5f67f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-28-21 calico-apiserver-6cbcf5f67f-dvvv5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali62fbf5bacd8 [] [] }} ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-dvvv5" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-" Dec 12 18:14:02.615387 containerd[1629]: 2025-12-12 18:14:02.531 [INFO][4125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-dvvv5" 
WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0" Dec 12 18:14:02.615387 containerd[1629]: 2025-12-12 18:14:02.564 [INFO][4147] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" HandleID="k8s-pod-network.d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Workload="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0" Dec 12 18:14:02.615578 containerd[1629]: 2025-12-12 18:14:02.565 [INFO][4147] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" HandleID="k8s-pod-network.d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Workload="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-28-21", "pod":"calico-apiserver-6cbcf5f67f-dvvv5", "timestamp":"2025-12-12 18:14:02.564872613 +0000 UTC"}, Hostname:"172-234-28-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:14:02.615578 containerd[1629]: 2025-12-12 18:14:02.565 [INFO][4147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:14:02.615578 containerd[1629]: 2025-12-12 18:14:02.565 [INFO][4147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:14:02.615578 containerd[1629]: 2025-12-12 18:14:02.565 [INFO][4147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-28-21' Dec 12 18:14:02.615578 containerd[1629]: 2025-12-12 18:14:02.572 [INFO][4147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" host="172-234-28-21" Dec 12 18:14:02.615578 containerd[1629]: 2025-12-12 18:14:02.579 [INFO][4147] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-28-21" Dec 12 18:14:02.615578 containerd[1629]: 2025-12-12 18:14:02.582 [INFO][4147] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:02.615578 containerd[1629]: 2025-12-12 18:14:02.584 [INFO][4147] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:02.615578 containerd[1629]: 2025-12-12 18:14:02.586 [INFO][4147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:02.615578 containerd[1629]: 2025-12-12 18:14:02.586 [INFO][4147] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" host="172-234-28-21" Dec 12 18:14:02.615769 containerd[1629]: 2025-12-12 18:14:02.587 [INFO][4147] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21 Dec 12 18:14:02.615769 containerd[1629]: 2025-12-12 18:14:02.591 [INFO][4147] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" host="172-234-28-21" Dec 12 18:14:02.615769 containerd[1629]: 2025-12-12 18:14:02.596 [INFO][4147] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.2/26] block=192.168.42.0/26 
handle="k8s-pod-network.d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" host="172-234-28-21" Dec 12 18:14:02.615769 containerd[1629]: 2025-12-12 18:14:02.596 [INFO][4147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.2/26] handle="k8s-pod-network.d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" host="172-234-28-21" Dec 12 18:14:02.615769 containerd[1629]: 2025-12-12 18:14:02.597 [INFO][4147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:14:02.615769 containerd[1629]: 2025-12-12 18:14:02.597 [INFO][4147] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.2/26] IPv6=[] ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" HandleID="k8s-pod-network.d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Workload="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0" Dec 12 18:14:02.615881 containerd[1629]: 2025-12-12 18:14:02.601 [INFO][4125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-dvvv5" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0", GenerateName:"calico-apiserver-6cbcf5f67f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b35a9cda-d256-490b-8223-d4936abd6ff5", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbcf5f67f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"", Pod:"calico-apiserver-6cbcf5f67f-dvvv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62fbf5bacd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:02.615935 containerd[1629]: 2025-12-12 18:14:02.602 [INFO][4125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.2/32] ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-dvvv5" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0" Dec 12 18:14:02.615935 containerd[1629]: 2025-12-12 18:14:02.602 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62fbf5bacd8 ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-dvvv5" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0" Dec 12 18:14:02.615935 containerd[1629]: 2025-12-12 18:14:02.604 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-dvvv5" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0" Dec 12 18:14:02.615993 containerd[1629]: 2025-12-12 18:14:02.605 [INFO][4125] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-dvvv5" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0", GenerateName:"calico-apiserver-6cbcf5f67f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b35a9cda-d256-490b-8223-d4936abd6ff5", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbcf5f67f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21", Pod:"calico-apiserver-6cbcf5f67f-dvvv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62fbf5bacd8", MAC:"92:f6:76:1b:45:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:02.616046 containerd[1629]: 2025-12-12 18:14:02.611 [INFO][4125] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-dvvv5" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--dvvv5-eth0" Dec 12 18:14:02.651599 containerd[1629]: time="2025-12-12T18:14:02.651521593Z" level=info msg="connecting to shim d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21" address="unix:///run/containerd/s/c5f1555be369b5188a143f6e5a65d5c5cfa1da32ca55546a4a0a8af99944b9aa" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:14:02.687792 systemd[1]: Started cri-containerd-d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21.scope - libcontainer container d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21. Dec 12 18:14:02.715345 systemd-networkd[1530]: cali80943f8dace: Link UP Dec 12 18:14:02.715682 systemd-networkd[1530]: cali80943f8dace: Gained carrier Dec 12 18:14:02.726391 containerd[1629]: 2025-12-12 18:14:02.520 [INFO][4119] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:14:02.726391 containerd[1629]: 2025-12-12 18:14:02.531 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0 coredns-668d6bf9bc- kube-system 7d6b7ada-d360-4468-8cf8-f61bde72489e 802 0 2025-12-12 18:13:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-28-21 coredns-668d6bf9bc-86vzh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali80943f8dace [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-86vzh" 
WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-" Dec 12 18:14:02.726391 containerd[1629]: 2025-12-12 18:14:02.531 [INFO][4119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-86vzh" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0" Dec 12 18:14:02.726391 containerd[1629]: 2025-12-12 18:14:02.580 [INFO][4145] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" HandleID="k8s-pod-network.80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Workload="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0" Dec 12 18:14:02.726567 containerd[1629]: 2025-12-12 18:14:02.581 [INFO][4145] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" HandleID="k8s-pod-network.80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Workload="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd640), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-28-21", "pod":"coredns-668d6bf9bc-86vzh", "timestamp":"2025-12-12 18:14:02.580788013 +0000 UTC"}, Hostname:"172-234-28-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:14:02.726567 containerd[1629]: 2025-12-12 18:14:02.581 [INFO][4145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:14:02.726567 containerd[1629]: 2025-12-12 18:14:02.597 [INFO][4145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:14:02.726567 containerd[1629]: 2025-12-12 18:14:02.597 [INFO][4145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-28-21' Dec 12 18:14:02.726567 containerd[1629]: 2025-12-12 18:14:02.674 [INFO][4145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" host="172-234-28-21" Dec 12 18:14:02.726567 containerd[1629]: 2025-12-12 18:14:02.681 [INFO][4145] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-28-21" Dec 12 18:14:02.726567 containerd[1629]: 2025-12-12 18:14:02.686 [INFO][4145] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:02.726567 containerd[1629]: 2025-12-12 18:14:02.688 [INFO][4145] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:02.726567 containerd[1629]: 2025-12-12 18:14:02.692 [INFO][4145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:02.726567 containerd[1629]: 2025-12-12 18:14:02.692 [INFO][4145] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" host="172-234-28-21" Dec 12 18:14:02.726810 containerd[1629]: 2025-12-12 18:14:02.694 [INFO][4145] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef Dec 12 18:14:02.726810 containerd[1629]: 2025-12-12 18:14:02.700 [INFO][4145] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" host="172-234-28-21" Dec 12 18:14:02.726810 containerd[1629]: 2025-12-12 18:14:02.705 [INFO][4145] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.3/26] block=192.168.42.0/26 
handle="k8s-pod-network.80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" host="172-234-28-21" Dec 12 18:14:02.726810 containerd[1629]: 2025-12-12 18:14:02.705 [INFO][4145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.3/26] handle="k8s-pod-network.80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" host="172-234-28-21" Dec 12 18:14:02.726810 containerd[1629]: 2025-12-12 18:14:02.705 [INFO][4145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:14:02.726810 containerd[1629]: 2025-12-12 18:14:02.705 [INFO][4145] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.3/26] IPv6=[] ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" HandleID="k8s-pod-network.80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Workload="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0" Dec 12 18:14:02.726925 containerd[1629]: 2025-12-12 18:14:02.708 [INFO][4119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-86vzh" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d6b7ada-d360-4468-8cf8-f61bde72489e", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"", Pod:"coredns-668d6bf9bc-86vzh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80943f8dace", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:02.726925 containerd[1629]: 2025-12-12 18:14:02.708 [INFO][4119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.3/32] ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-86vzh" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0" Dec 12 18:14:02.726925 containerd[1629]: 2025-12-12 18:14:02.708 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80943f8dace ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-86vzh" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0" Dec 12 18:14:02.726925 containerd[1629]: 2025-12-12 18:14:02.713 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-86vzh" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0" Dec 12 18:14:02.726925 containerd[1629]: 2025-12-12 18:14:02.714 [INFO][4119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-86vzh" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d6b7ada-d360-4468-8cf8-f61bde72489e", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef", Pod:"coredns-668d6bf9bc-86vzh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80943f8dace", MAC:"42:1f:a5:84:09:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:02.726925 containerd[1629]: 2025-12-12 18:14:02.721 [INFO][4119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-86vzh" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--86vzh-eth0" Dec 12 18:14:02.739594 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 12 18:14:02.739669 kernel: audit: type=1334 audit(1765563242.733:602): prog-id=192 op=LOAD Dec 12 18:14:02.733000 audit: BPF prog-id=192 op=LOAD Dec 12 18:14:02.733000 audit: BPF prog-id=193 op=LOAD Dec 12 18:14:02.749446 kernel: audit: type=1334 audit(1765563242.733:603): prog-id=193 op=LOAD Dec 12 18:14:02.751453 containerd[1629]: time="2025-12-12T18:14:02.751402243Z" level=info msg="connecting to shim 80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef" address="unix:///run/containerd/s/ff6fe5e14f9824d85ceeea8cf871c8811d93299860b48e99bba79011de0f6b38" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:14:02.761358 kernel: audit: type=1300 audit(1765563242.733:603): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.733000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438656639666363376164333866316265633536613330373665373665 Dec 12 18:14:02.773463 kernel: audit: type=1327 audit(1765563242.733:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438656639666363376164333866316265633536613330373665373665 Dec 12 18:14:02.780376 kernel: audit: type=1334 audit(1765563242.733:604): prog-id=193 op=UNLOAD Dec 12 18:14:02.733000 audit: BPF prog-id=193 op=UNLOAD Dec 12 18:14:02.733000 audit[4197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.791029 kernel: audit: type=1300 audit(1765563242.733:604): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438656639666363376164333866316265633536613330373665373665 Dec 12 18:14:02.795454 systemd[1]: Started cri-containerd-80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef.scope - libcontainer 
container 80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef. Dec 12 18:14:02.804677 kernel: audit: type=1327 audit(1765563242.733:604): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438656639666363376164333866316265633536613330373665373665 Dec 12 18:14:02.733000 audit: BPF prog-id=194 op=LOAD Dec 12 18:14:02.808613 kernel: audit: type=1334 audit(1765563242.733:605): prog-id=194 op=LOAD Dec 12 18:14:02.733000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.824369 kernel: audit: type=1300 audit(1765563242.733:605): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.824474 kernel: audit: type=1327 audit(1765563242.733:605): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438656639666363376164333866316265633536613330373665373665 Dec 12 18:14:02.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438656639666363376164333866316265633536613330373665373665 Dec 12 18:14:02.738000 audit: BPF prog-id=195 op=LOAD Dec 12 18:14:02.738000 audit[4197]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438656639666363376164333866316265633536613330373665373665 Dec 12 18:14:02.738000 audit: BPF prog-id=195 op=UNLOAD Dec 12 18:14:02.738000 audit[4197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438656639666363376164333866316265633536613330373665373665 Dec 12 18:14:02.738000 audit: BPF prog-id=194 op=UNLOAD Dec 12 18:14:02.738000 audit[4197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438656639666363376164333866316265633536613330373665373665 Dec 12 18:14:02.738000 audit: BPF prog-id=196 op=LOAD Dec 12 
18:14:02.738000 audit[4197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4186 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438656639666363376164333866316265633536613330373665373665 Dec 12 18:14:02.813000 audit: BPF prog-id=197 op=LOAD Dec 12 18:14:02.814000 audit: BPF prog-id=198 op=LOAD Dec 12 18:14:02.814000 audit[4243]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4233 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830333037663336353063626338366330366163393438366563636665 Dec 12 18:14:02.814000 audit: BPF prog-id=198 op=UNLOAD Dec 12 18:14:02.814000 audit[4243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4233 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.814000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830333037663336353063626338366330366163393438366563636665 Dec 12 18:14:02.815000 audit: BPF prog-id=199 op=LOAD Dec 12 18:14:02.815000 audit[4243]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4233 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830333037663336353063626338366330366163393438366563636665 Dec 12 18:14:02.815000 audit: BPF prog-id=200 op=LOAD Dec 12 18:14:02.815000 audit[4243]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4233 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830333037663336353063626338366330366163393438366563636665 Dec 12 18:14:02.815000 audit: BPF prog-id=200 op=UNLOAD Dec 12 18:14:02.815000 audit[4243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4233 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:14:02.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830333037663336353063626338366330366163393438366563636665 Dec 12 18:14:02.815000 audit: BPF prog-id=199 op=UNLOAD Dec 12 18:14:02.815000 audit[4243]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4233 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830333037663336353063626338366330366163393438366563636665 Dec 12 18:14:02.815000 audit: BPF prog-id=201 op=LOAD Dec 12 18:14:02.815000 audit[4243]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4233 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830333037663336353063626338366330366163393438366563636665 Dec 12 18:14:02.858203 containerd[1629]: time="2025-12-12T18:14:02.858128613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbcf5f67f-dvvv5,Uid:b35a9cda-d256-490b-8223-d4936abd6ff5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"d8ef9fcc7ad38f1bec56a3076e76ed1c82e0dae91c12d8d6cd9af5d3c7f9ff21\"" Dec 12 18:14:02.861490 containerd[1629]: time="2025-12-12T18:14:02.860877143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:14:02.874638 containerd[1629]: time="2025-12-12T18:14:02.874603453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86vzh,Uid:7d6b7ada-d360-4468-8cf8-f61bde72489e,Namespace:kube-system,Attempt:0,} returns sandbox id \"80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef\"" Dec 12 18:14:02.876426 kubelet[2802]: E1212 18:14:02.876283 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:02.878747 containerd[1629]: time="2025-12-12T18:14:02.878705893Z" level=info msg="CreateContainer within sandbox \"80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:14:02.886075 containerd[1629]: time="2025-12-12T18:14:02.886043633Z" level=info msg="Container 806d2c0bf76a57fd10bc2b086c1c386c89e12b8be118d3830c63c9c003f541ef: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:14:02.889894 containerd[1629]: time="2025-12-12T18:14:02.889864063Z" level=info msg="CreateContainer within sandbox \"80307f3650cbc86c06ac9486eccfedd38bdf7c6038fc46462ab4c2499bebc7ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"806d2c0bf76a57fd10bc2b086c1c386c89e12b8be118d3830c63c9c003f541ef\"" Dec 12 18:14:02.890484 containerd[1629]: time="2025-12-12T18:14:02.890363033Z" level=info msg="StartContainer for \"806d2c0bf76a57fd10bc2b086c1c386c89e12b8be118d3830c63c9c003f541ef\"" Dec 12 18:14:02.891851 containerd[1629]: time="2025-12-12T18:14:02.891802553Z" level=info msg="connecting to shim 806d2c0bf76a57fd10bc2b086c1c386c89e12b8be118d3830c63c9c003f541ef" 
address="unix:///run/containerd/s/ff6fe5e14f9824d85ceeea8cf871c8811d93299860b48e99bba79011de0f6b38" protocol=ttrpc version=3 Dec 12 18:14:02.920477 systemd[1]: Started cri-containerd-806d2c0bf76a57fd10bc2b086c1c386c89e12b8be118d3830c63c9c003f541ef.scope - libcontainer container 806d2c0bf76a57fd10bc2b086c1c386c89e12b8be118d3830c63c9c003f541ef. Dec 12 18:14:02.937000 audit: BPF prog-id=202 op=LOAD Dec 12 18:14:02.937000 audit: BPF prog-id=203 op=LOAD Dec 12 18:14:02.937000 audit[4276]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4233 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830366432633062663736613537666431306263326230383663316333 Dec 12 18:14:02.938000 audit: BPF prog-id=203 op=UNLOAD Dec 12 18:14:02.938000 audit[4276]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4233 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830366432633062663736613537666431306263326230383663316333 Dec 12 18:14:02.938000 audit: BPF prog-id=204 op=LOAD Dec 12 18:14:02.938000 audit[4276]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4233 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830366432633062663736613537666431306263326230383663316333 Dec 12 18:14:02.939000 audit: BPF prog-id=205 op=LOAD Dec 12 18:14:02.939000 audit[4276]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4233 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830366432633062663736613537666431306263326230383663316333 Dec 12 18:14:02.939000 audit: BPF prog-id=205 op=UNLOAD Dec 12 18:14:02.939000 audit[4276]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4233 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830366432633062663736613537666431306263326230383663316333 Dec 12 18:14:02.941000 audit: BPF prog-id=204 op=UNLOAD Dec 12 18:14:02.941000 audit[4276]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4233 pid=4276 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830366432633062663736613537666431306263326230383663316333 Dec 12 18:14:02.941000 audit: BPF prog-id=206 op=LOAD Dec 12 18:14:02.941000 audit[4276]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4233 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:02.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830366432633062663736613537666431306263326230383663316333 Dec 12 18:14:02.973410 containerd[1629]: time="2025-12-12T18:14:02.973362843Z" level=info msg="StartContainer for \"806d2c0bf76a57fd10bc2b086c1c386c89e12b8be118d3830c63c9c003f541ef\" returns successfully" Dec 12 18:14:02.989898 containerd[1629]: time="2025-12-12T18:14:02.989815723Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:02.990833 containerd[1629]: time="2025-12-12T18:14:02.990786013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:14:02.991006 containerd[1629]: time="2025-12-12T18:14:02.990958373Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:02.991334 kubelet[2802]: E1212 18:14:02.991239 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:02.991508 kubelet[2802]: E1212 18:14:02.991434 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:02.992249 kubelet[2802]: E1212 18:14:02.992200 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxlxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cbcf5f67f-dvvv5_calico-apiserver(b35a9cda-d256-490b-8223-d4936abd6ff5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:02.993597 kubelet[2802]: E1212 18:14:02.993542 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:14:03.660768 kubelet[2802]: E1212 18:14:03.660718 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:03.664442 kubelet[2802]: E1212 18:14:03.664371 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:14:03.682058 kubelet[2802]: I1212 18:14:03.682008 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-86vzh" podStartSLOduration=33.681995343 podStartE2EDuration="33.681995343s" podCreationTimestamp="2025-12-12 18:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:14:03.671432653 +0000 UTC m=+38.308450981" 
watchObservedRunningTime="2025-12-12 18:14:03.681995343 +0000 UTC m=+38.319013651" Dec 12 18:14:03.686000 audit[4317]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:03.686000 audit[4317]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc0ab543e0 a2=0 a3=7ffc0ab543cc items=0 ppid=2909 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:03.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:03.691000 audit[4317]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:03.691000 audit[4317]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc0ab543e0 a2=0 a3=0 items=0 ppid=2909 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:03.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:03.722000 audit[4325]: NETFILTER_CFG table=filter:121 family=2 entries=19 op=nft_register_rule pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:03.722000 audit[4325]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffde9e03e80 a2=0 a3=7ffde9e03e6c items=0 ppid=2909 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:14:03.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:03.726000 audit[4325]: NETFILTER_CFG table=nat:122 family=2 entries=33 op=nft_register_chain pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:03.726000 audit[4325]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffde9e03e80 a2=0 a3=7ffde9e03e6c items=0 ppid=2909 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:03.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:04.444489 systemd-networkd[1530]: cali62fbf5bacd8: Gained IPv6LL Dec 12 18:14:04.479980 kubelet[2802]: E1212 18:14:04.479874 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:04.480490 containerd[1629]: time="2025-12-12T18:14:04.480429203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-86pl8,Uid:c713cd34-08f7-480c-b91d-bedb3b68bb36,Namespace:calico-system,Attempt:0,}" Dec 12 18:14:04.481478 containerd[1629]: time="2025-12-12T18:14:04.481120563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8ggwr,Uid:f46da395-0309-47b8-bfd7-ce69c3c79781,Namespace:calico-system,Attempt:0,}" Dec 12 18:14:04.481478 containerd[1629]: time="2025-12-12T18:14:04.480433653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zljqj,Uid:00c15d96-e0b6-4ce0-a16d-0db018401241,Namespace:kube-system,Attempt:0,}" Dec 12 18:14:04.508638 systemd-networkd[1530]: cali80943f8dace: Gained IPv6LL Dec 12 18:14:04.654009 
systemd-networkd[1530]: cali32f67b0d557: Link UP Dec 12 18:14:04.656216 systemd-networkd[1530]: cali32f67b0d557: Gained carrier Dec 12 18:14:04.671437 kubelet[2802]: E1212 18:14:04.670555 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:14:04.671437 kubelet[2802]: E1212 18:14:04.671269 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.539 [INFO][4336] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.563 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--28--21-k8s-csi--node--driver--8ggwr-eth0 csi-node-driver- calico-system f46da395-0309-47b8-bfd7-ce69c3c79781 703 0 2025-12-12 18:13:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-28-21 csi-node-driver-8ggwr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali32f67b0d557 [] [] }} ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" Namespace="calico-system" 
Pod="csi-node-driver-8ggwr" WorkloadEndpoint="172--234--28--21-k8s-csi--node--driver--8ggwr-" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.563 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" Namespace="calico-system" Pod="csi-node-driver-8ggwr" WorkloadEndpoint="172--234--28--21-k8s-csi--node--driver--8ggwr-eth0" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.606 [INFO][4382] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" HandleID="k8s-pod-network.106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" Workload="172--234--28--21-k8s-csi--node--driver--8ggwr-eth0" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.607 [INFO][4382] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" HandleID="k8s-pod-network.106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" Workload="172--234--28--21-k8s-csi--node--driver--8ggwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-28-21", "pod":"csi-node-driver-8ggwr", "timestamp":"2025-12-12 18:14:04.606668403 +0000 UTC"}, Hostname:"172-234-28-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.607 [INFO][4382] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.607 [INFO][4382] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.607 [INFO][4382] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-28-21' Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.613 [INFO][4382] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" host="172-234-28-21" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.616 [INFO][4382] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-28-21" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.624 [INFO][4382] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.627 [INFO][4382] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.633 [INFO][4382] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.633 [INFO][4382] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" host="172-234-28-21" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.635 [INFO][4382] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.639 [INFO][4382] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" host="172-234-28-21" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.644 [INFO][4382] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.4/26] block=192.168.42.0/26 
handle="k8s-pod-network.106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" host="172-234-28-21" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.644 [INFO][4382] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.4/26] handle="k8s-pod-network.106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" host="172-234-28-21" Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.644 [INFO][4382] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:14:04.672225 containerd[1629]: 2025-12-12 18:14:04.644 [INFO][4382] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.4/26] IPv6=[] ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" HandleID="k8s-pod-network.106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" Workload="172--234--28--21-k8s-csi--node--driver--8ggwr-eth0" Dec 12 18:14:04.673401 containerd[1629]: 2025-12-12 18:14:04.648 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" Namespace="calico-system" Pod="csi-node-driver-8ggwr" WorkloadEndpoint="172--234--28--21-k8s-csi--node--driver--8ggwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-csi--node--driver--8ggwr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f46da395-0309-47b8-bfd7-ce69c3c79781", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"", Pod:"csi-node-driver-8ggwr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32f67b0d557", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:04.673401 containerd[1629]: 2025-12-12 18:14:04.648 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.4/32] ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" Namespace="calico-system" Pod="csi-node-driver-8ggwr" WorkloadEndpoint="172--234--28--21-k8s-csi--node--driver--8ggwr-eth0" Dec 12 18:14:04.673401 containerd[1629]: 2025-12-12 18:14:04.648 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32f67b0d557 ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" Namespace="calico-system" Pod="csi-node-driver-8ggwr" WorkloadEndpoint="172--234--28--21-k8s-csi--node--driver--8ggwr-eth0" Dec 12 18:14:04.673401 containerd[1629]: 2025-12-12 18:14:04.657 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" Namespace="calico-system" Pod="csi-node-driver-8ggwr" WorkloadEndpoint="172--234--28--21-k8s-csi--node--driver--8ggwr-eth0" Dec 12 18:14:04.673401 containerd[1629]: 2025-12-12 18:14:04.658 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" Namespace="calico-system" Pod="csi-node-driver-8ggwr" WorkloadEndpoint="172--234--28--21-k8s-csi--node--driver--8ggwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-csi--node--driver--8ggwr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f46da395-0309-47b8-bfd7-ce69c3c79781", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d", Pod:"csi-node-driver-8ggwr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32f67b0d557", MAC:"1e:12:c8:38:25:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:04.673401 containerd[1629]: 2025-12-12 18:14:04.667 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" 
Namespace="calico-system" Pod="csi-node-driver-8ggwr" WorkloadEndpoint="172--234--28--21-k8s-csi--node--driver--8ggwr-eth0" Dec 12 18:14:04.697687 containerd[1629]: time="2025-12-12T18:14:04.697372603Z" level=info msg="connecting to shim 106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d" address="unix:///run/containerd/s/a11504c735159e1bc6f45341c1e1f1a441fbf7030fccc3171dbd11dc56d0f9e2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:14:04.728468 systemd[1]: Started cri-containerd-106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d.scope - libcontainer container 106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d. Dec 12 18:14:04.754000 audit: BPF prog-id=207 op=LOAD Dec 12 18:14:04.754000 audit: BPF prog-id=208 op=LOAD Dec 12 18:14:04.754000 audit[4428]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4416 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130366534633830333565663365303035353632396235663831396366 Dec 12 18:14:04.754000 audit: BPF prog-id=208 op=UNLOAD Dec 12 18:14:04.754000 audit[4428]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4416 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.754000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130366534633830333565663365303035353632396235663831396366 Dec 12 18:14:04.755000 audit: BPF prog-id=209 op=LOAD Dec 12 18:14:04.755000 audit[4428]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4416 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130366534633830333565663365303035353632396235663831396366 Dec 12 18:14:04.755000 audit: BPF prog-id=210 op=LOAD Dec 12 18:14:04.755000 audit[4428]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4416 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130366534633830333565663365303035353632396235663831396366 Dec 12 18:14:04.755000 audit: BPF prog-id=210 op=UNLOAD Dec 12 18:14:04.755000 audit[4428]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4416 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:14:04.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130366534633830333565663365303035353632396235663831396366 Dec 12 18:14:04.755000 audit: BPF prog-id=209 op=UNLOAD Dec 12 18:14:04.755000 audit[4428]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4416 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130366534633830333565663365303035353632396235663831396366 Dec 12 18:14:04.755000 audit: BPF prog-id=211 op=LOAD Dec 12 18:14:04.755000 audit[4428]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4416 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130366534633830333565663365303035353632396235663831396366 Dec 12 18:14:04.769610 systemd-networkd[1530]: calie0bf1d796fa: Link UP Dec 12 18:14:04.774781 systemd-networkd[1530]: calie0bf1d796fa: Gained carrier Dec 12 18:14:04.790323 containerd[1629]: time="2025-12-12T18:14:04.789963573Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-8ggwr,Uid:f46da395-0309-47b8-bfd7-ce69c3c79781,Namespace:calico-system,Attempt:0,} returns sandbox id \"106e4c8035ef3e0055629b5f819cf09c7ca620d36538a9e8cd76e6f290963f2d\"" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.535 [INFO][4345] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.560 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0 coredns-668d6bf9bc- kube-system 00c15d96-e0b6-4ce0-a16d-0db018401241 799 0 2025-12-12 18:13:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-28-21 coredns-668d6bf9bc-zljqj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0bf1d796fa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Namespace="kube-system" Pod="coredns-668d6bf9bc-zljqj" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.560 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Namespace="kube-system" Pod="coredns-668d6bf9bc-zljqj" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.617 [INFO][4380] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" HandleID="k8s-pod-network.963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Workload="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0" Dec 12 
18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.617 [INFO][4380] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" HandleID="k8s-pod-network.963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Workload="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf700), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-28-21", "pod":"coredns-668d6bf9bc-zljqj", "timestamp":"2025-12-12 18:14:04.617542373 +0000 UTC"}, Hostname:"172-234-28-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.617 [INFO][4380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.644 [INFO][4380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.645 [INFO][4380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-28-21' Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.714 [INFO][4380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" host="172-234-28-21" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.720 [INFO][4380] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-28-21" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.726 [INFO][4380] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.729 [INFO][4380] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.731 [INFO][4380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.732 [INFO][4380] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" host="172-234-28-21" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.735 [INFO][4380] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.741 [INFO][4380] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" host="172-234-28-21" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.750 [INFO][4380] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.5/26] block=192.168.42.0/26 
handle="k8s-pod-network.963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" host="172-234-28-21" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.750 [INFO][4380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.5/26] handle="k8s-pod-network.963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" host="172-234-28-21" Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.750 [INFO][4380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:14:04.795919 containerd[1629]: 2025-12-12 18:14:04.750 [INFO][4380] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.5/26] IPv6=[] ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" HandleID="k8s-pod-network.963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Workload="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0" Dec 12 18:14:04.796801 containerd[1629]: 2025-12-12 18:14:04.759 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Namespace="kube-system" Pod="coredns-668d6bf9bc-zljqj" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"00c15d96-e0b6-4ce0-a16d-0db018401241", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"", Pod:"coredns-668d6bf9bc-zljqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0bf1d796fa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:04.796801 containerd[1629]: 2025-12-12 18:14:04.759 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.5/32] ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Namespace="kube-system" Pod="coredns-668d6bf9bc-zljqj" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0" Dec 12 18:14:04.796801 containerd[1629]: 2025-12-12 18:14:04.759 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0bf1d796fa ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Namespace="kube-system" Pod="coredns-668d6bf9bc-zljqj" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0" Dec 12 18:14:04.796801 containerd[1629]: 2025-12-12 18:14:04.776 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-zljqj" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0" Dec 12 18:14:04.796801 containerd[1629]: 2025-12-12 18:14:04.777 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Namespace="kube-system" Pod="coredns-668d6bf9bc-zljqj" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"00c15d96-e0b6-4ce0-a16d-0db018401241", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de", Pod:"coredns-668d6bf9bc-zljqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0bf1d796fa", MAC:"82:6a:ba:88:88:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:04.796801 containerd[1629]: 2025-12-12 18:14:04.787 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" Namespace="kube-system" Pod="coredns-668d6bf9bc-zljqj" WorkloadEndpoint="172--234--28--21-k8s-coredns--668d6bf9bc--zljqj-eth0" Dec 12 18:14:04.798844 containerd[1629]: time="2025-12-12T18:14:04.798288573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:14:04.837735 containerd[1629]: time="2025-12-12T18:14:04.837679343Z" level=info msg="connecting to shim 963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de" address="unix:///run/containerd/s/d9594749f0d46313e34364ca2361a62274e98465c355a8fc44a4b1f31f591c7d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:14:04.856255 systemd-networkd[1530]: cali324b0e8601c: Link UP Dec 12 18:14:04.856507 systemd-networkd[1530]: cali324b0e8601c: Gained carrier Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.548 [INFO][4335] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.573 [INFO][4335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0 goldmane-666569f655- calico-system c713cd34-08f7-480c-b91d-bedb3b68bb36 805 0 2025-12-12 18:13:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-234-28-21 goldmane-666569f655-86pl8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali324b0e8601c [] [] }} ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Namespace="calico-system" Pod="goldmane-666569f655-86pl8" WorkloadEndpoint="172--234--28--21-k8s-goldmane--666569f655--86pl8-" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.573 [INFO][4335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Namespace="calico-system" Pod="goldmane-666569f655-86pl8" WorkloadEndpoint="172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.636 [INFO][4390] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" HandleID="k8s-pod-network.33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Workload="172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.636 [INFO][4390] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" HandleID="k8s-pod-network.33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Workload="172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5700), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-28-21", "pod":"goldmane-666569f655-86pl8", "timestamp":"2025-12-12 18:14:04.636731363 +0000 UTC"}, Hostname:"172-234-28-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:14:04.879589 
containerd[1629]: 2025-12-12 18:14:04.636 [INFO][4390] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.750 [INFO][4390] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.750 [INFO][4390] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-28-21' Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.815 [INFO][4390] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" host="172-234-28-21" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.821 [INFO][4390] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-28-21" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.825 [INFO][4390] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.828 [INFO][4390] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.831 [INFO][4390] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.832 [INFO][4390] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" host="172-234-28-21" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.835 [INFO][4390] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.840 [INFO][4390] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 
handle="k8s-pod-network.33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" host="172-234-28-21" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.846 [INFO][4390] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.6/26] block=192.168.42.0/26 handle="k8s-pod-network.33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" host="172-234-28-21" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.846 [INFO][4390] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.6/26] handle="k8s-pod-network.33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" host="172-234-28-21" Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.846 [INFO][4390] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:14:04.879589 containerd[1629]: 2025-12-12 18:14:04.846 [INFO][4390] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.6/26] IPv6=[] ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" HandleID="k8s-pod-network.33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Workload="172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0" Dec 12 18:14:04.880854 containerd[1629]: 2025-12-12 18:14:04.850 [INFO][4335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Namespace="calico-system" Pod="goldmane-666569f655-86pl8" WorkloadEndpoint="172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c713cd34-08f7-480c-b91d-bedb3b68bb36", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 41, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"", Pod:"goldmane-666569f655-86pl8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali324b0e8601c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:04.880854 containerd[1629]: 2025-12-12 18:14:04.850 [INFO][4335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.6/32] ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Namespace="calico-system" Pod="goldmane-666569f655-86pl8" WorkloadEndpoint="172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0" Dec 12 18:14:04.880854 containerd[1629]: 2025-12-12 18:14:04.850 [INFO][4335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali324b0e8601c ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Namespace="calico-system" Pod="goldmane-666569f655-86pl8" WorkloadEndpoint="172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0" Dec 12 18:14:04.880854 containerd[1629]: 2025-12-12 18:14:04.856 [INFO][4335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Namespace="calico-system" Pod="goldmane-666569f655-86pl8" 
WorkloadEndpoint="172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0" Dec 12 18:14:04.880854 containerd[1629]: 2025-12-12 18:14:04.857 [INFO][4335] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Namespace="calico-system" Pod="goldmane-666569f655-86pl8" WorkloadEndpoint="172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c713cd34-08f7-480c-b91d-bedb3b68bb36", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d", Pod:"goldmane-666569f655-86pl8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali324b0e8601c", MAC:"d2:62:a2:f1:9e:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:04.880854 containerd[1629]: 2025-12-12 18:14:04.874 
[INFO][4335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" Namespace="calico-system" Pod="goldmane-666569f655-86pl8" WorkloadEndpoint="172--234--28--21-k8s-goldmane--666569f655--86pl8-eth0" Dec 12 18:14:04.890514 systemd[1]: Started cri-containerd-963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de.scope - libcontainer container 963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de. Dec 12 18:14:04.906000 audit: BPF prog-id=212 op=LOAD Dec 12 18:14:04.906000 audit: BPF prog-id=213 op=LOAD Dec 12 18:14:04.906000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=4474 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936336134636266623464356666613134633866386162633730323237 Dec 12 18:14:04.907000 audit: BPF prog-id=213 op=UNLOAD Dec 12 18:14:04.907000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4474 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936336134636266623464356666613134633866386162633730323237 Dec 12 18:14:04.907000 audit: BPF prog-id=214 op=LOAD Dec 12 18:14:04.907000 
audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=4474 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936336134636266623464356666613134633866386162633730323237 Dec 12 18:14:04.908000 audit: BPF prog-id=215 op=LOAD Dec 12 18:14:04.908000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000174218 a2=98 a3=0 items=0 ppid=4474 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936336134636266623464356666613134633866386162633730323237 Dec 12 18:14:04.908000 audit: BPF prog-id=215 op=UNLOAD Dec 12 18:14:04.908000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4474 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936336134636266623464356666613134633866386162633730323237 Dec 12 18:14:04.908000 audit: BPF 
prog-id=214 op=UNLOAD Dec 12 18:14:04.908000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4474 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936336134636266623464356666613134633866386162633730323237 Dec 12 18:14:04.909000 audit: BPF prog-id=216 op=LOAD Dec 12 18:14:04.909000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001746e8 a2=98 a3=0 items=0 ppid=4474 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:04.909000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936336134636266623464356666613134633866386162633730323237 Dec 12 18:14:04.917931 containerd[1629]: time="2025-12-12T18:14:04.917889913Z" level=info msg="connecting to shim 33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d" address="unix:///run/containerd/s/60401a697ee9cf8b9e24335f65bb0d721259062a472190106759ac8e974efee2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:14:04.944247 containerd[1629]: time="2025-12-12T18:14:04.944212273Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:04.947498 containerd[1629]: time="2025-12-12T18:14:04.947464523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:14:04.947599 containerd[1629]: time="2025-12-12T18:14:04.947484993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:04.948037 kubelet[2802]: E1212 18:14:04.947899 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:14:04.948269 kubelet[2802]: E1212 18:14:04.948112 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:14:04.948810 kubelet[2802]: E1212 18:14:04.948743 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6hbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8ggwr_calico-system(f46da395-0309-47b8-bfd7-ce69c3c79781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 12 18:14:04.966885 containerd[1629]: time="2025-12-12T18:14:04.966859233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:14:04.969691 systemd[1]: Started cri-containerd-33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d.scope - libcontainer container 33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d. Dec 12 18:14:04.998543 containerd[1629]: time="2025-12-12T18:14:04.998189803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zljqj,Uid:00c15d96-e0b6-4ce0-a16d-0db018401241,Namespace:kube-system,Attempt:0,} returns sandbox id \"963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de\"" Dec 12 18:14:04.999000 audit: BPF prog-id=217 op=LOAD Dec 12 18:14:05.002151 kubelet[2802]: E1212 18:14:05.002028 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:05.000000 audit: BPF prog-id=218 op=LOAD Dec 12 18:14:05.000000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4528 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333646366666234653866303136346533343662336237373661343764 Dec 12 18:14:05.000000 audit: BPF prog-id=218 op=UNLOAD Dec 12 18:14:05.000000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4528 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333646366666234653866303136346533343662336237373661343764 Dec 12 18:14:05.000000 audit: BPF prog-id=219 op=LOAD Dec 12 18:14:05.000000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4528 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333646366666234653866303136346533343662336237373661343764 Dec 12 18:14:05.000000 audit: BPF prog-id=220 op=LOAD Dec 12 18:14:05.000000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4528 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333646366666234653866303136346533343662336237373661343764 Dec 12 18:14:05.000000 audit: BPF prog-id=220 op=UNLOAD Dec 12 18:14:05.000000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4528 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333646366666234653866303136346533343662336237373661343764 Dec 12 18:14:05.000000 audit: BPF prog-id=219 op=UNLOAD Dec 12 18:14:05.000000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4528 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333646366666234653866303136346533343662336237373661343764 Dec 12 18:14:05.001000 audit: BPF prog-id=221 op=LOAD Dec 12 18:14:05.001000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4528 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333646366666234653866303136346533343662336237373661343764 Dec 12 18:14:05.007307 containerd[1629]: time="2025-12-12T18:14:05.007207173Z" level=info msg="CreateContainer within sandbox 
\"963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:14:05.019503 containerd[1629]: time="2025-12-12T18:14:05.019450513Z" level=info msg="Container 6a8952cd5db752cc8dc90feed8d08b778664f65ccaa6b674169ec9e1225ddca4: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:14:05.025364 containerd[1629]: time="2025-12-12T18:14:05.025329473Z" level=info msg="CreateContainer within sandbox \"963a4cbfb4d5ffa14c8f8abc70227a0bae4eb795efb5b88ef483fd051aec09de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a8952cd5db752cc8dc90feed8d08b778664f65ccaa6b674169ec9e1225ddca4\"" Dec 12 18:14:05.027723 containerd[1629]: time="2025-12-12T18:14:05.026761323Z" level=info msg="StartContainer for \"6a8952cd5db752cc8dc90feed8d08b778664f65ccaa6b674169ec9e1225ddca4\"" Dec 12 18:14:05.028646 containerd[1629]: time="2025-12-12T18:14:05.028625613Z" level=info msg="connecting to shim 6a8952cd5db752cc8dc90feed8d08b778664f65ccaa6b674169ec9e1225ddca4" address="unix:///run/containerd/s/d9594749f0d46313e34364ca2361a62274e98465c355a8fc44a4b1f31f591c7d" protocol=ttrpc version=3 Dec 12 18:14:05.058655 systemd[1]: Started cri-containerd-6a8952cd5db752cc8dc90feed8d08b778664f65ccaa6b674169ec9e1225ddca4.scope - libcontainer container 6a8952cd5db752cc8dc90feed8d08b778664f65ccaa6b674169ec9e1225ddca4. 
Dec 12 18:14:05.075834 containerd[1629]: time="2025-12-12T18:14:05.075806463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-86pl8,Uid:c713cd34-08f7-480c-b91d-bedb3b68bb36,Namespace:calico-system,Attempt:0,} returns sandbox id \"33dcffb4e8f0164e346b3b776a47d0c62b9804362ffe751cb6e9abad3d17811d\"" Dec 12 18:14:05.076000 audit: BPF prog-id=222 op=LOAD Dec 12 18:14:05.076000 audit: BPF prog-id=223 op=LOAD Dec 12 18:14:05.076000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4474 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383935326364356462373532636338646339306665656438643038 Dec 12 18:14:05.076000 audit: BPF prog-id=223 op=UNLOAD Dec 12 18:14:05.076000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4474 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383935326364356462373532636338646339306665656438643038 Dec 12 18:14:05.077000 audit: BPF prog-id=224 op=LOAD Dec 12 18:14:05.077000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4474 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383935326364356462373532636338646339306665656438643038 Dec 12 18:14:05.077000 audit: BPF prog-id=225 op=LOAD Dec 12 18:14:05.077000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4474 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383935326364356462373532636338646339306665656438643038 Dec 12 18:14:05.077000 audit: BPF prog-id=225 op=UNLOAD Dec 12 18:14:05.077000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4474 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383935326364356462373532636338646339306665656438643038 Dec 12 18:14:05.077000 audit: BPF prog-id=224 op=UNLOAD Dec 12 18:14:05.077000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4474 pid=4569 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383935326364356462373532636338646339306665656438643038 Dec 12 18:14:05.077000 audit: BPF prog-id=226 op=LOAD Dec 12 18:14:05.077000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4474 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383935326364356462373532636338646339306665656438643038 Dec 12 18:14:05.100900 containerd[1629]: time="2025-12-12T18:14:05.100817523Z" level=info msg="StartContainer for \"6a8952cd5db752cc8dc90feed8d08b778664f65ccaa6b674169ec9e1225ddca4\" returns successfully" Dec 12 18:14:05.137657 containerd[1629]: time="2025-12-12T18:14:05.137556243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:05.138659 containerd[1629]: time="2025-12-12T18:14:05.138634353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:14:05.138814 containerd[1629]: time="2025-12-12T18:14:05.138662393Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:05.139007 kubelet[2802]: E1212 18:14:05.138970 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:14:05.139106 kubelet[2802]: E1212 18:14:05.139083 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:14:05.139404 kubelet[2802]: E1212 18:14:05.139344 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6hbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8ggwr_calico-system(f46da395-0309-47b8-bfd7-ce69c3c79781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:05.140204 containerd[1629]: time="2025-12-12T18:14:05.140149273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:14:05.141543 kubelet[2802]: E1212 18:14:05.141246 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:14:05.268105 containerd[1629]: time="2025-12-12T18:14:05.267964533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:05.270367 containerd[1629]: time="2025-12-12T18:14:05.270182323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:14:05.270367 containerd[1629]: time="2025-12-12T18:14:05.270219143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:05.270793 kubelet[2802]: E1212 18:14:05.270719 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:14:05.270793 kubelet[2802]: E1212 18:14:05.270788 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:14:05.271373 kubelet[2802]: E1212 18:14:05.271286 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csz9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{Probe
Handler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-86pl8_calico-system(c713cd34-08f7-480c-b91d-bedb3b68bb36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:05.272677 kubelet[2802]: E1212 18:14:05.272618 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:14:05.482498 containerd[1629]: 
time="2025-12-12T18:14:05.482438353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccfb466b6-9s5wz,Uid:38a7f512-410f-47be-bd45-a402f5067f03,Namespace:calico-system,Attempt:0,}" Dec 12 18:14:05.600592 systemd-networkd[1530]: calicab520a5015: Link UP Dec 12 18:14:05.601397 systemd-networkd[1530]: calicab520a5015: Gained carrier Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.519 [INFO][4607] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.534 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0 calico-kube-controllers-6ccfb466b6- calico-system 38a7f512-410f-47be-bd45-a402f5067f03 807 0 2025-12-12 18:13:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6ccfb466b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-28-21 calico-kube-controllers-6ccfb466b6-9s5wz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicab520a5015 [] [] }} ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Namespace="calico-system" Pod="calico-kube-controllers-6ccfb466b6-9s5wz" WorkloadEndpoint="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.534 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Namespace="calico-system" Pod="calico-kube-controllers-6ccfb466b6-9s5wz" WorkloadEndpoint="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 
18:14:05.564 [INFO][4620] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" HandleID="k8s-pod-network.1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Workload="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.564 [INFO][4620] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" HandleID="k8s-pod-network.1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Workload="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a40), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-28-21", "pod":"calico-kube-controllers-6ccfb466b6-9s5wz", "timestamp":"2025-12-12 18:14:05.564473913 +0000 UTC"}, Hostname:"172-234-28-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.564 [INFO][4620] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.564 [INFO][4620] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.564 [INFO][4620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-28-21' Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.570 [INFO][4620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" host="172-234-28-21" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.574 [INFO][4620] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-28-21" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.578 [INFO][4620] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.580 [INFO][4620] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.581 [INFO][4620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.581 [INFO][4620] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" host="172-234-28-21" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.583 [INFO][4620] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37 Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.587 [INFO][4620] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" host="172-234-28-21" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.593 [INFO][4620] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.7/26] block=192.168.42.0/26 
handle="k8s-pod-network.1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" host="172-234-28-21" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.593 [INFO][4620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.7/26] handle="k8s-pod-network.1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" host="172-234-28-21" Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.593 [INFO][4620] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:14:05.618110 containerd[1629]: 2025-12-12 18:14:05.593 [INFO][4620] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.7/26] IPv6=[] ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" HandleID="k8s-pod-network.1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Workload="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0" Dec 12 18:14:05.618732 containerd[1629]: 2025-12-12 18:14:05.595 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Namespace="calico-system" Pod="calico-kube-controllers-6ccfb466b6-9s5wz" WorkloadEndpoint="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0", GenerateName:"calico-kube-controllers-6ccfb466b6-", Namespace:"calico-system", SelfLink:"", UID:"38a7f512-410f-47be-bd45-a402f5067f03", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ccfb466b6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"", Pod:"calico-kube-controllers-6ccfb466b6-9s5wz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicab520a5015", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:05.618732 containerd[1629]: 2025-12-12 18:14:05.596 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.7/32] ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Namespace="calico-system" Pod="calico-kube-controllers-6ccfb466b6-9s5wz" WorkloadEndpoint="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0" Dec 12 18:14:05.618732 containerd[1629]: 2025-12-12 18:14:05.596 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicab520a5015 ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Namespace="calico-system" Pod="calico-kube-controllers-6ccfb466b6-9s5wz" WorkloadEndpoint="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0" Dec 12 18:14:05.618732 containerd[1629]: 2025-12-12 18:14:05.602 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Namespace="calico-system" Pod="calico-kube-controllers-6ccfb466b6-9s5wz" 
WorkloadEndpoint="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0" Dec 12 18:14:05.618732 containerd[1629]: 2025-12-12 18:14:05.602 [INFO][4607] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Namespace="calico-system" Pod="calico-kube-controllers-6ccfb466b6-9s5wz" WorkloadEndpoint="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0", GenerateName:"calico-kube-controllers-6ccfb466b6-", Namespace:"calico-system", SelfLink:"", UID:"38a7f512-410f-47be-bd45-a402f5067f03", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ccfb466b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37", Pod:"calico-kube-controllers-6ccfb466b6-9s5wz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicab520a5015", MAC:"12:68:5b:63:f1:dd", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:05.618732 containerd[1629]: 2025-12-12 18:14:05.613 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" Namespace="calico-system" Pod="calico-kube-controllers-6ccfb466b6-9s5wz" WorkloadEndpoint="172--234--28--21-k8s-calico--kube--controllers--6ccfb466b6--9s5wz-eth0" Dec 12 18:14:05.641178 containerd[1629]: time="2025-12-12T18:14:05.641041973Z" level=info msg="connecting to shim 1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37" address="unix:///run/containerd/s/17a1cc858c5b16cb0b483b98250c92eeece5d97d37098003ee9ab0e3e85043dd" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:14:05.674555 systemd[1]: Started cri-containerd-1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37.scope - libcontainer container 1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37. 
Dec 12 18:14:05.676145 kubelet[2802]: E1212 18:14:05.675967 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:14:05.682869 kubelet[2802]: E1212 18:14:05.682835 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:14:05.685158 kubelet[2802]: E1212 18:14:05.684915 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:05.685621 kubelet[2802]: E1212 18:14:05.685273 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 
172.232.0.21 172.232.0.13" Dec 12 18:14:05.714000 audit[4672]: NETFILTER_CFG table=filter:123 family=2 entries=16 op=nft_register_rule pid=4672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:05.714000 audit[4672]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe2b31ad60 a2=0 a3=7ffe2b31ad4c items=0 ppid=2909 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:05.720928 kubelet[2802]: I1212 18:14:05.720673 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zljqj" podStartSLOduration=35.720656993 podStartE2EDuration="35.720656993s" podCreationTimestamp="2025-12-12 18:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:14:05.708054743 +0000 UTC m=+40.345073051" watchObservedRunningTime="2025-12-12 18:14:05.720656993 +0000 UTC m=+40.357675301" Dec 12 18:14:05.721000 audit[4672]: NETFILTER_CFG table=nat:124 family=2 entries=18 op=nft_register_rule pid=4672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:05.721000 audit[4672]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffe2b31ad60 a2=0 a3=0 items=0 ppid=2909 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:05.753000 audit[4680]: 
NETFILTER_CFG table=filter:125 family=2 entries=16 op=nft_register_rule pid=4680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:05.753000 audit[4680]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc86b70c20 a2=0 a3=7ffc86b70c0c items=0 ppid=2909 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:05.758000 audit: BPF prog-id=227 op=LOAD Dec 12 18:14:05.758000 audit: BPF prog-id=228 op=LOAD Dec 12 18:14:05.758000 audit[4652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4640 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163626362363762333165643635323332666563623033316161343132 Dec 12 18:14:05.758000 audit: BPF prog-id=228 op=UNLOAD Dec 12 18:14:05.758000 audit[4652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4640 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.758000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163626362363762333165643635323332666563623033316161343132 Dec 12 18:14:05.758000 audit: BPF prog-id=229 op=LOAD Dec 12 18:14:05.758000 audit[4652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4640 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163626362363762333165643635323332666563623033316161343132 Dec 12 18:14:05.758000 audit: BPF prog-id=230 op=LOAD Dec 12 18:14:05.758000 audit[4652]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4640 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163626362363762333165643635323332666563623033316161343132 Dec 12 18:14:05.759000 audit: BPF prog-id=230 op=UNLOAD Dec 12 18:14:05.759000 audit[4652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4640 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:14:05.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163626362363762333165643635323332666563623033316161343132 Dec 12 18:14:05.759000 audit: BPF prog-id=229 op=UNLOAD Dec 12 18:14:05.759000 audit[4652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4640 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163626362363762333165643635323332666563623033316161343132 Dec 12 18:14:05.759000 audit: BPF prog-id=231 op=LOAD Dec 12 18:14:05.759000 audit[4652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4640 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163626362363762333165643635323332666563623033316161343132 Dec 12 18:14:05.769000 audit[4680]: NETFILTER_CFG table=nat:126 family=2 entries=54 op=nft_register_chain pid=4680 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:05.769000 audit[4680]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffc86b70c20 a2=0 a3=7ffc86b70c0c items=0 
ppid=2909 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:05.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:05.821049 containerd[1629]: time="2025-12-12T18:14:05.821012033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ccfb466b6-9s5wz,Uid:38a7f512-410f-47be-bd45-a402f5067f03,Namespace:calico-system,Attempt:0,} returns sandbox id \"1cbcb67b31ed65232fecb031aa412128611c0ba74060deea9da3e9a33232ee37\"" Dec 12 18:14:05.824519 containerd[1629]: time="2025-12-12T18:14:05.823662873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:14:05.950255 containerd[1629]: time="2025-12-12T18:14:05.949700723Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:05.950745 containerd[1629]: time="2025-12-12T18:14:05.950675023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:14:05.950817 containerd[1629]: time="2025-12-12T18:14:05.950776353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:05.951954 kubelet[2802]: E1212 18:14:05.951888 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 
18:14:05.951954 kubelet[2802]: E1212 18:14:05.951935 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:14:05.952390 kubelet[2802]: E1212 18:14:05.952179 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9kts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6ccfb466b6-9s5wz_calico-system(38a7f512-410f-47be-bd45-a402f5067f03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:05.954286 kubelet[2802]: E1212 18:14:05.954243 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:14:06.044596 
systemd-networkd[1530]: cali324b0e8601c: Gained IPv6LL Dec 12 18:14:06.479237 containerd[1629]: time="2025-12-12T18:14:06.479184343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbcf5f67f-bmmdp,Uid:415d4ab0-257f-4751-838c-4b86e1cd5e79,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:14:06.556758 systemd-networkd[1530]: cali32f67b0d557: Gained IPv6LL Dec 12 18:14:06.579689 systemd-networkd[1530]: cali46c683a133c: Link UP Dec 12 18:14:06.580533 systemd-networkd[1530]: cali46c683a133c: Gained carrier Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.502 [INFO][4711] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.510 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0 calico-apiserver-6cbcf5f67f- calico-apiserver 415d4ab0-257f-4751-838c-4b86e1cd5e79 809 0 2025-12-12 18:13:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cbcf5f67f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-28-21 calico-apiserver-6cbcf5f67f-bmmdp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali46c683a133c [] [] }} ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-bmmdp" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.510 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-bmmdp" 
WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.539 [INFO][4720] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" HandleID="k8s-pod-network.066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Workload="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.539 [INFO][4720] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" HandleID="k8s-pod-network.066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Workload="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-28-21", "pod":"calico-apiserver-6cbcf5f67f-bmmdp", "timestamp":"2025-12-12 18:14:06.539175123 +0000 UTC"}, Hostname:"172-234-28-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.539 [INFO][4720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.539 [INFO][4720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.539 [INFO][4720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-28-21' Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.545 [INFO][4720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" host="172-234-28-21" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.550 [INFO][4720] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-28-21" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.556 [INFO][4720] ipam/ipam.go 511: Trying affinity for 192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.558 [INFO][4720] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.561 [INFO][4720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.0/26 host="172-234-28-21" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.561 [INFO][4720] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.0/26 handle="k8s-pod-network.066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" host="172-234-28-21" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.562 [INFO][4720] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3 Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.566 [INFO][4720] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.0/26 handle="k8s-pod-network.066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" host="172-234-28-21" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.573 [INFO][4720] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.8/26] block=192.168.42.0/26 
handle="k8s-pod-network.066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" host="172-234-28-21" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.573 [INFO][4720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.8/26] handle="k8s-pod-network.066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" host="172-234-28-21" Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.573 [INFO][4720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:14:06.595939 containerd[1629]: 2025-12-12 18:14:06.574 [INFO][4720] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.8/26] IPv6=[] ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" HandleID="k8s-pod-network.066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Workload="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0" Dec 12 18:14:06.596705 containerd[1629]: 2025-12-12 18:14:06.576 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-bmmdp" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0", GenerateName:"calico-apiserver-6cbcf5f67f-", Namespace:"calico-apiserver", SelfLink:"", UID:"415d4ab0-257f-4751-838c-4b86e1cd5e79", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbcf5f67f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"", Pod:"calico-apiserver-6cbcf5f67f-bmmdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46c683a133c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:06.596705 containerd[1629]: 2025-12-12 18:14:06.576 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.8/32] ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-bmmdp" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0" Dec 12 18:14:06.596705 containerd[1629]: 2025-12-12 18:14:06.576 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46c683a133c ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-bmmdp" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0" Dec 12 18:14:06.596705 containerd[1629]: 2025-12-12 18:14:06.581 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-bmmdp" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0" Dec 12 18:14:06.596705 containerd[1629]: 2025-12-12 18:14:06.581 [INFO][4711] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-bmmdp" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0", GenerateName:"calico-apiserver-6cbcf5f67f-", Namespace:"calico-apiserver", SelfLink:"", UID:"415d4ab0-257f-4751-838c-4b86e1cd5e79", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbcf5f67f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-28-21", ContainerID:"066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3", Pod:"calico-apiserver-6cbcf5f67f-bmmdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46c683a133c", MAC:"56:c9:9e:9e:ef:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:14:06.596705 containerd[1629]: 2025-12-12 18:14:06.590 [INFO][4711] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" Namespace="calico-apiserver" Pod="calico-apiserver-6cbcf5f67f-bmmdp" WorkloadEndpoint="172--234--28--21-k8s-calico--apiserver--6cbcf5f67f--bmmdp-eth0" Dec 12 18:14:06.631538 containerd[1629]: time="2025-12-12T18:14:06.631465913Z" level=info msg="connecting to shim 066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3" address="unix:///run/containerd/s/a15f7f495107bac0184a6a7a4354a2b95db69b8acb47e3f6cee2073ecd1281f5" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:14:06.672616 systemd[1]: Started cri-containerd-066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3.scope - libcontainer container 066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3. Dec 12 18:14:06.684466 systemd-networkd[1530]: calie0bf1d796fa: Gained IPv6LL Dec 12 18:14:06.688941 kubelet[2802]: E1212 18:14:06.688878 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:06.691673 kubelet[2802]: E1212 18:14:06.691638 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:14:06.691751 kubelet[2802]: E1212 18:14:06.691733 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:14:06.695635 kubelet[2802]: E1212 18:14:06.694463 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:14:06.701000 audit: BPF prog-id=232 op=LOAD Dec 12 18:14:06.701000 audit: BPF prog-id=233 op=LOAD Dec 12 18:14:06.701000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4741 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:06.701000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036366262306638613665626237643538663362346133393961383333 Dec 12 18:14:06.701000 audit: BPF prog-id=233 op=UNLOAD Dec 12 18:14:06.701000 audit[4752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4741 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:06.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036366262306638613665626237643538663362346133393961383333 Dec 12 18:14:06.702000 audit: BPF prog-id=234 op=LOAD Dec 12 18:14:06.702000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4741 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:06.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036366262306638613665626237643538663362346133393961383333 Dec 12 18:14:06.702000 audit: BPF prog-id=235 op=LOAD Dec 12 18:14:06.702000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4741 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 18:14:06.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036366262306638613665626237643538663362346133393961383333 Dec 12 18:14:06.702000 audit: BPF prog-id=235 op=UNLOAD Dec 12 18:14:06.702000 audit[4752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4741 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:06.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036366262306638613665626237643538663362346133393961383333 Dec 12 18:14:06.702000 audit: BPF prog-id=234 op=UNLOAD Dec 12 18:14:06.702000 audit[4752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4741 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:06.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036366262306638613665626237643538663362346133393961383333 Dec 12 18:14:06.702000 audit: BPF prog-id=236 op=LOAD Dec 12 18:14:06.702000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4741 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:06.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036366262306638613665626237643538663362346133393961383333 Dec 12 18:14:06.801950 containerd[1629]: time="2025-12-12T18:14:06.801886513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbcf5f67f-bmmdp,Uid:415d4ab0-257f-4751-838c-4b86e1cd5e79,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"066bb0f8a6ebb7d58f3b4a399a833a2c8ec09e50753f3a93c123a800d04458c3\"" Dec 12 18:14:06.805630 containerd[1629]: time="2025-12-12T18:14:06.805410743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:14:06.933240 containerd[1629]: time="2025-12-12T18:14:06.933202423Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:06.934362 containerd[1629]: time="2025-12-12T18:14:06.934336953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:06.934536 containerd[1629]: time="2025-12-12T18:14:06.934429673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:14:06.934743 kubelet[2802]: E1212 18:14:06.934681 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:06.934888 kubelet[2802]: E1212 18:14:06.934866 2802 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:06.935607 kubelet[2802]: E1212 18:14:06.935555 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh5cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cbcf5f67f-bmmdp_calico-apiserver(415d4ab0-257f-4751-838c-4b86e1cd5e79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:06.936882 kubelet[2802]: E1212 18:14:06.936837 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:14:07.392222 kubelet[2802]: I1212 18:14:07.391817 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:14:07.393322 kubelet[2802]: E1212 18:14:07.393272 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:07.516510 systemd-networkd[1530]: calicab520a5015: Gained IPv6LL Dec 12 18:14:07.693458 kubelet[2802]: E1212 18:14:07.692957 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:14:07.693458 kubelet[2802]: E1212 18:14:07.693404 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:07.696725 kubelet[2802]: E1212 18:14:07.693652 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:14:07.730000 audit[4848]: NETFILTER_CFG table=filter:127 family=2 entries=16 op=nft_register_rule pid=4848 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:07.730000 audit[4848]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffed7cb0330 a2=0 a3=7ffed7cb031c items=0 ppid=2909 pid=4848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:07.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:07.733000 audit[4848]: NETFILTER_CFG table=nat:128 family=2 entries=18 op=nft_register_rule pid=4848 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:07.733000 audit[4848]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffed7cb0330 a2=0 a3=0 items=0 ppid=2909 pid=4848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:07.733000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:08.540466 systemd-networkd[1530]: cali46c683a133c: Gained IPv6LL Dec 12 18:14:08.694545 kubelet[2802]: E1212 18:14:08.694382 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:14:10.481376 containerd[1629]: time="2025-12-12T18:14:10.481257921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:14:10.605779 containerd[1629]: time="2025-12-12T18:14:10.605701200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:10.607452 containerd[1629]: 
time="2025-12-12T18:14:10.607319200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:10.607452 containerd[1629]: time="2025-12-12T18:14:10.607338091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:14:10.607675 kubelet[2802]: E1212 18:14:10.607600 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:14:10.607675 kubelet[2802]: E1212 18:14:10.607672 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:14:10.608400 kubelet[2802]: E1212 18:14:10.608363 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3f4f98888c449589a809d6f0e403cbb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rdd5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d7c5d55f9-7mw7q_calico-system(f5d7327f-3d2b-4ade-8746-8210d015da61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:10.611024 containerd[1629]: time="2025-12-12T18:14:10.610983978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:14:10.739882 containerd[1629]: 
time="2025-12-12T18:14:10.739712248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:10.741431 containerd[1629]: time="2025-12-12T18:14:10.741362579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:14:10.741612 containerd[1629]: time="2025-12-12T18:14:10.741396011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:10.742174 kubelet[2802]: E1212 18:14:10.741758 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:14:10.742174 kubelet[2802]: E1212 18:14:10.741825 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:14:10.742174 kubelet[2802]: E1212 18:14:10.741972 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdd5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d7c5d55f9-7mw7q_calico-system(f5d7327f-3d2b-4ade-8746-8210d015da61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:10.743999 kubelet[2802]: E1212 18:14:10.743940 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:14:12.706274 kubelet[2802]: I1212 18:14:12.706201 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:14:12.708124 kubelet[2802]: E1212 18:14:12.706796 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:12.735000 audit[4957]: NETFILTER_CFG table=filter:129 family=2 entries=15 op=nft_register_rule pid=4957 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:12.738037 kernel: kauditd_printk_skb: 218 callbacks suppressed Dec 12 18:14:12.738121 kernel: audit: type=1325 audit(1765563252.735:684): table=filter:129 family=2 entries=15 op=nft_register_rule pid=4957 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:12.743372 kernel: audit: type=1300 audit(1765563252.735:684): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffe76f74b0 a2=0 a3=7fffe76f749c items=0 ppid=2909 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:12.735000 audit[4957]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffe76f74b0 a2=0 a3=7fffe76f749c items=0 ppid=2909 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:12.753359 kernel: audit: type=1327 audit(1765563252.735:684): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:12.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:12.751000 audit[4957]: NETFILTER_CFG table=nat:130 family=2 entries=25 op=nft_register_chain pid=4957 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:12.759325 kernel: audit: type=1325 audit(1765563252.751:685): table=nat:130 family=2 entries=25 op=nft_register_chain pid=4957 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:14:12.751000 audit[4957]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7fffe76f74b0 a2=0 a3=7fffe76f749c items=0 ppid=2909 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:12.764844 kernel: audit: type=1300 audit(1765563252.751:685): arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7fffe76f74b0 a2=0 a3=7fffe76f749c items=0 ppid=2909 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:14:12.772373 kernel: audit: type=1327 audit(1765563252.751:685): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:12.751000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:14:13.548000 audit: BPF prog-id=237 op=LOAD Dec 12 18:14:13.551857 kernel: audit: type=1334 audit(1765563253.548:686): prog-id=237 op=LOAD Dec 12 18:14:13.548000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff1ab4bd0 a2=98 a3=1fffffffffffffff items=0 ppid=4988 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.562314 kernel: audit: type=1300 audit(1765563253.548:686): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff1ab4bd0 a2=98 a3=1fffffffffffffff items=0 ppid=4988 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.574342 kernel: audit: type=1327 audit(1765563253.548:686): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:14:13.548000 audit: BPF prog-id=237 op=UNLOAD Dec 12 18:14:13.548000 audit[5011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 
a1=8 a2=7ffff1ab4ba0 a3=0 items=0 ppid=4988 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:14:13.548000 audit: BPF prog-id=238 op=LOAD Dec 12 18:14:13.548000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff1ab4ab0 a2=94 a3=3 items=0 ppid=4988 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:14:13.548000 audit: BPF prog-id=238 op=UNLOAD Dec 12 18:14:13.548000 audit[5011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff1ab4ab0 a2=94 a3=3 items=0 ppid=4988 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.577381 kernel: audit: type=1334 audit(1765563253.548:687): prog-id=237 op=UNLOAD Dec 12 18:14:13.548000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:14:13.548000 audit: BPF prog-id=239 op=LOAD Dec 12 18:14:13.548000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff1ab4af0 a2=94 a3=7ffff1ab4cd0 items=0 ppid=4988 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:14:13.548000 audit: BPF prog-id=239 op=UNLOAD Dec 12 18:14:13.548000 audit[5011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff1ab4af0 a2=94 a3=7ffff1ab4cd0 items=0 ppid=4988 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 18:14:13.548000 audit: BPF prog-id=240 op=LOAD Dec 12 18:14:13.548000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe58b1da40 a2=98 a3=3 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.548000 audit: BPF prog-id=240 op=UNLOAD Dec 12 18:14:13.548000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe58b1da10 a3=0 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.548000 audit: BPF prog-id=241 op=LOAD Dec 12 18:14:13.548000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe58b1d830 a2=94 a3=54428f items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.548000 audit: BPF prog-id=241 op=UNLOAD Dec 12 18:14:13.548000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe58b1d830 a2=94 a3=54428f items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.548000 audit: BPF prog-id=242 op=LOAD Dec 12 18:14:13.548000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe58b1d860 a2=94 a3=2 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.548000 audit: BPF prog-id=242 op=UNLOAD Dec 12 18:14:13.548000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe58b1d860 a2=0 a3=2 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.548000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.705019 kubelet[2802]: E1212 18:14:13.704216 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:13.771000 audit: BPF prog-id=243 op=LOAD Dec 12 18:14:13.771000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe58b1d720 a2=94 a3=1 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.771000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.772000 audit: BPF prog-id=243 op=UNLOAD Dec 12 18:14:13.772000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe58b1d720 a2=94 a3=1 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.772000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.781000 audit: BPF prog-id=244 op=LOAD Dec 12 18:14:13.781000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe58b1d710 a2=94 a3=4 items=0 ppid=4988 pid=5012 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.781000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.782000 audit: BPF prog-id=244 op=UNLOAD Dec 12 18:14:13.782000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe58b1d710 a2=0 a3=4 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.782000 audit: BPF prog-id=245 op=LOAD Dec 12 18:14:13.782000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe58b1d570 a2=94 a3=5 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.782000 audit: BPF prog-id=245 op=UNLOAD Dec 12 18:14:13.782000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe58b1d570 a2=0 a3=5 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.782000 audit: BPF prog-id=246 op=LOAD Dec 12 18:14:13.782000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe58b1d790 a2=94 a3=6 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.782000 audit: BPF prog-id=246 op=UNLOAD Dec 12 18:14:13.782000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe58b1d790 a2=0 a3=6 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.782000 audit: BPF prog-id=247 op=LOAD Dec 12 18:14:13.782000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe58b1cf40 a2=94 a3=88 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.782000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.783000 audit: BPF prog-id=248 op=LOAD Dec 12 18:14:13.783000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe58b1cdc0 a2=94 a3=2 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.783000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.783000 audit: BPF prog-id=248 op=UNLOAD Dec 12 18:14:13.783000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe58b1cdf0 a2=0 a3=7ffe58b1cef0 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.783000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.783000 audit: BPF prog-id=247 op=UNLOAD Dec 12 18:14:13.783000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=245a5d10 a2=0 a3=71d14f19e637f008 items=0 ppid=4988 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.783000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 18:14:13.793000 audit: BPF prog-id=249 op=LOAD Dec 12 18:14:13.793000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd7c6a750 a2=98 a3=1999999999999999 items=0 ppid=4988 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.793000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:14:13.793000 audit: BPF prog-id=249 op=UNLOAD Dec 12 18:14:13.793000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdd7c6a720 a3=0 items=0 ppid=4988 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.793000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:14:13.793000 audit: BPF prog-id=250 op=LOAD Dec 12 18:14:13.793000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd7c6a630 a2=94 a3=ffff items=0 ppid=4988 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.793000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:14:13.793000 audit: BPF prog-id=250 op=UNLOAD Dec 12 18:14:13.793000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdd7c6a630 a2=94 a3=ffff items=0 ppid=4988 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.793000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:14:13.793000 audit: BPF prog-id=251 op=LOAD Dec 12 18:14:13.793000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd7c6a670 a2=94 a3=7ffdd7c6a850 items=0 ppid=4988 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.793000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:14:13.793000 audit: BPF prog-id=251 op=UNLOAD Dec 12 18:14:13.793000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdd7c6a670 a2=94 a3=7ffdd7c6a850 items=0 ppid=4988 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.793000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 18:14:13.867662 systemd-networkd[1530]: vxlan.calico: Link UP Dec 12 18:14:13.867672 systemd-networkd[1530]: vxlan.calico: Gained carrier Dec 12 18:14:13.899000 audit: BPF prog-id=252 op=LOAD Dec 12 18:14:13.899000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd40aa2b30 a2=98 a3=0 items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.899000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.899000 audit: BPF prog-id=252 op=UNLOAD Dec 12 18:14:13.899000 audit[5047]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=8 a2=7ffd40aa2b00 a3=0 items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.899000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.900000 audit: BPF prog-id=253 op=LOAD Dec 12 18:14:13.900000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd40aa2940 a2=94 a3=54428f items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.900000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.900000 audit: BPF prog-id=253 op=UNLOAD Dec 12 18:14:13.900000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd40aa2940 a2=94 a3=54428f items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.900000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.900000 audit: BPF prog-id=254 op=LOAD Dec 12 18:14:13.900000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd40aa2970 a2=94 a3=2 items=0 
ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.900000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.901000 audit: BPF prog-id=254 op=UNLOAD Dec 12 18:14:13.901000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd40aa2970 a2=0 a3=2 items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.901000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.901000 audit: BPF prog-id=255 op=LOAD Dec 12 18:14:13.901000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd40aa2720 a2=94 a3=4 items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.901000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.903000 audit: BPF prog-id=255 op=UNLOAD Dec 12 18:14:13.903000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd40aa2720 a2=94 a3=4 items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.903000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.903000 audit: BPF prog-id=256 op=LOAD Dec 12 18:14:13.903000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd40aa2820 a2=94 a3=7ffd40aa29a0 items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.903000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.903000 audit: BPF prog-id=256 op=UNLOAD Dec 12 18:14:13.903000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd40aa2820 a2=0 a3=7ffd40aa29a0 items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.903000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.909000 audit: BPF prog-id=257 op=LOAD Dec 12 18:14:13.909000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd40aa1f50 a2=94 a3=2 items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.909000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.909000 audit: BPF prog-id=257 op=UNLOAD Dec 12 18:14:13.909000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd40aa1f50 a2=0 a3=2 items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.909000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.909000 audit: BPF prog-id=258 op=LOAD Dec 12 18:14:13.909000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd40aa2050 a2=94 a3=30 items=0 ppid=4988 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.909000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 18:14:13.923000 audit: BPF prog-id=259 op=LOAD Dec 12 18:14:13.923000 audit[5054]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff4707e040 a2=98 a3=0 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.923000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:13.924000 audit: BPF prog-id=259 op=UNLOAD Dec 12 18:14:13.924000 audit[5054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff4707e010 a3=0 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.924000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:13.924000 audit: BPF prog-id=260 op=LOAD Dec 12 18:14:13.924000 audit[5054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff4707de30 a2=94 a3=54428f items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.924000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:13.925000 audit: BPF prog-id=260 op=UNLOAD Dec 12 18:14:13.925000 audit[5054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff4707de30 a2=94 a3=54428f items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.925000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:13.925000 audit: BPF prog-id=261 op=LOAD Dec 12 18:14:13.925000 audit[5054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff4707de60 a2=94 a3=2 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.925000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:13.925000 audit: BPF prog-id=261 op=UNLOAD Dec 12 18:14:13.925000 audit[5054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff4707de60 a2=0 a3=2 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:13.925000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.108000 audit: BPF prog-id=262 op=LOAD Dec 12 18:14:14.108000 audit[5054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff4707dd20 a2=94 a3=1 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.108000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.108000 audit: BPF prog-id=262 op=UNLOAD Dec 12 18:14:14.108000 audit[5054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff4707dd20 a2=94 a3=1 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.108000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.118000 audit: BPF prog-id=263 op=LOAD Dec 12 18:14:14.118000 audit[5054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff4707dd10 a2=94 a3=4 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.119000 audit: BPF prog-id=263 op=UNLOAD Dec 12 18:14:14.119000 audit[5054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff4707dd10 a2=0 a3=4 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.119000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.119000 audit: BPF prog-id=264 op=LOAD Dec 12 18:14:14.119000 audit[5054]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff4707db70 a2=94 a3=5 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.119000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.120000 audit: BPF prog-id=264 op=UNLOAD Dec 12 18:14:14.120000 audit[5054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff4707db70 a2=0 a3=5 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.120000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.120000 audit: BPF prog-id=265 op=LOAD Dec 12 18:14:14.120000 audit[5054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff4707dd90 a2=94 a3=6 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.120000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.120000 audit: BPF prog-id=265 op=UNLOAD Dec 12 18:14:14.120000 audit[5054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff4707dd90 a2=0 a3=6 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.120000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.120000 audit: BPF prog-id=266 op=LOAD Dec 12 18:14:14.120000 audit[5054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff4707d540 a2=94 a3=88 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.120000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.121000 audit: BPF prog-id=267 op=LOAD Dec 12 18:14:14.121000 audit[5054]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff4707d3c0 a2=94 a3=2 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.121000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.121000 audit: BPF prog-id=267 op=UNLOAD Dec 12 18:14:14.121000 audit[5054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff4707d3f0 a2=0 a3=7fff4707d4f0 items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.121000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.121000 audit: BPF prog-id=266 op=UNLOAD Dec 12 18:14:14.121000 audit[5054]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7986d10 a2=0 a3=d5fc1aeb5559457c items=0 ppid=4988 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.121000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 18:14:14.131000 audit: BPF prog-id=258 op=UNLOAD Dec 12 18:14:14.131000 audit[4988]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0007a40c0 a2=0 a3=0 items=0 ppid=3841 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.131000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 18:14:14.219000 audit[5084]: NETFILTER_CFG table=nat:131 family=2 
entries=15 op=nft_register_chain pid=5084 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:14:14.219000 audit[5084]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe2be8d250 a2=0 a3=7ffe2be8d23c items=0 ppid=4988 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.219000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:14:14.225000 audit[5088]: NETFILTER_CFG table=mangle:132 family=2 entries=16 op=nft_register_chain pid=5088 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:14:14.225000 audit[5088]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd60da7d10 a2=0 a3=7ffd60da7cfc items=0 ppid=4988 pid=5088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.225000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:14:14.232000 audit[5085]: NETFILTER_CFG table=raw:133 family=2 entries=21 op=nft_register_chain pid=5085 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:14:14.232000 audit[5085]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff2b995c60 a2=0 a3=7fff2b995c4c items=0 ppid=4988 pid=5085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.232000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:14:14.240000 audit[5090]: NETFILTER_CFG table=filter:134 family=2 entries=327 op=nft_register_chain pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 18:14:14.240000 audit[5090]: SYSCALL arch=c000003e syscall=46 success=yes exit=193468 a0=3 a1=7ffe824113b0 a2=0 a3=7ffe8241139c items=0 ppid=4988 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:14:14.240000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 18:14:15.452460 systemd-networkd[1530]: vxlan.calico: Gained IPv6LL Dec 12 18:14:17.481636 containerd[1629]: time="2025-12-12T18:14:17.481257679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:14:17.679888 containerd[1629]: time="2025-12-12T18:14:17.679822420Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:17.680938 containerd[1629]: time="2025-12-12T18:14:17.680906786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:14:17.681070 containerd[1629]: time="2025-12-12T18:14:17.681000709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:17.681332 kubelet[2802]: E1212 18:14:17.681192 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:17.681954 kubelet[2802]: E1212 18:14:17.681912 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:17.682802 kubelet[2802]: E1212 18:14:17.682736 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxlxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cbcf5f67f-dvvv5_calico-apiserver(b35a9cda-d256-490b-8223-d4936abd6ff5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:17.684034 kubelet[2802]: E1212 18:14:17.683917 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:14:18.480765 containerd[1629]: time="2025-12-12T18:14:18.480672748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:14:18.612632 containerd[1629]: time="2025-12-12T18:14:18.612165623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
18:14:18.614025 containerd[1629]: time="2025-12-12T18:14:18.613907002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:14:18.614357 containerd[1629]: time="2025-12-12T18:14:18.613969613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:18.614460 kubelet[2802]: E1212 18:14:18.614352 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:14:18.614460 kubelet[2802]: E1212 18:14:18.614430 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:14:18.614795 kubelet[2802]: E1212 18:14:18.614671 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csz9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-86pl8_calico-system(c713cd34-08f7-480c-b91d-bedb3b68bb36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:18.615926 kubelet[2802]: E1212 18:14:18.615888 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:14:19.480484 containerd[1629]: time="2025-12-12T18:14:19.480382050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:14:19.608954 containerd[1629]: time="2025-12-12T18:14:19.608884727Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:19.610888 containerd[1629]: 
time="2025-12-12T18:14:19.610729285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:14:19.610888 containerd[1629]: time="2025-12-12T18:14:19.610798287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:19.611309 kubelet[2802]: E1212 18:14:19.611251 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:14:19.611694 kubelet[2802]: E1212 18:14:19.611323 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:14:19.611694 kubelet[2802]: E1212 18:14:19.611438 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9kts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6ccfb466b6-9s5wz_calico-system(38a7f512-410f-47be-bd45-a402f5067f03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:19.613681 kubelet[2802]: E1212 18:14:19.613625 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:14:20.480078 containerd[1629]: time="2025-12-12T18:14:20.479992821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:14:20.613134 containerd[1629]: time="2025-12-12T18:14:20.613082941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
18:14:20.614083 containerd[1629]: time="2025-12-12T18:14:20.614036859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:14:20.614083 containerd[1629]: time="2025-12-12T18:14:20.614062650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:20.614211 kubelet[2802]: E1212 18:14:20.614182 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:20.614557 kubelet[2802]: E1212 18:14:20.614243 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:20.614557 kubelet[2802]: E1212 18:14:20.614373 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh5cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cbcf5f67f-bmmdp_calico-apiserver(415d4ab0-257f-4751-838c-4b86e1cd5e79): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:20.615792 kubelet[2802]: E1212 18:14:20.615767 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:14:21.480158 containerd[1629]: time="2025-12-12T18:14:21.480104031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:14:21.608493 containerd[1629]: time="2025-12-12T18:14:21.608447769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:21.609688 containerd[1629]: time="2025-12-12T18:14:21.609618391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:14:21.609688 containerd[1629]: time="2025-12-12T18:14:21.609718292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:21.610078 kubelet[2802]: E1212 18:14:21.610029 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:14:21.610171 kubelet[2802]: E1212 18:14:21.610085 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:14:21.610273 kubelet[2802]: E1212 18:14:21.610193 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6hbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8ggwr_calico-system(f46da395-0309-47b8-bfd7-ce69c3c79781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:21.612196 containerd[1629]: time="2025-12-12T18:14:21.612165728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:14:21.762694 containerd[1629]: time="2025-12-12T18:14:21.762327948Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:21.764064 containerd[1629]: time="2025-12-12T18:14:21.763997028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:14:21.764144 containerd[1629]: time="2025-12-12T18:14:21.764107811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:21.764416 kubelet[2802]: E1212 18:14:21.764339 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:14:21.764416 kubelet[2802]: E1212 18:14:21.764401 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:14:21.765051 kubelet[2802]: E1212 18:14:21.764527 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6hbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8ggwr_calico-system(f46da395-0309-47b8-bfd7-ce69c3c79781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:21.765982 kubelet[2802]: E1212 18:14:21.765909 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:14:24.483938 kubelet[2802]: E1212 18:14:24.483828 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:14:29.480097 kubelet[2802]: E1212 18:14:29.479498 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:14:31.480597 kubelet[2802]: E1212 18:14:31.480550 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:14:31.482055 kubelet[2802]: E1212 18:14:31.481177 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:14:33.480244 kubelet[2802]: E1212 18:14:33.480192 2802 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:14:35.481210 kubelet[2802]: E1212 18:14:35.481125 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:14:37.553754 kubelet[2802]: E1212 18:14:37.553612 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:38.479965 containerd[1629]: time="2025-12-12T18:14:38.479806938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:14:38.618881 containerd[1629]: time="2025-12-12T18:14:38.618813654Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Dec 12 18:14:38.620383 containerd[1629]: time="2025-12-12T18:14:38.620326303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:14:38.620383 containerd[1629]: time="2025-12-12T18:14:38.620357183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:38.620684 kubelet[2802]: E1212 18:14:38.620630 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:14:38.621152 kubelet[2802]: E1212 18:14:38.620725 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:14:38.622342 kubelet[2802]: E1212 18:14:38.621379 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3f4f98888c449589a809d6f0e403cbb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rdd5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d7c5d55f9-7mw7q_calico-system(f5d7327f-3d2b-4ade-8746-8210d015da61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:38.624007 containerd[1629]: time="2025-12-12T18:14:38.623988146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:14:38.751510 containerd[1629]: 
time="2025-12-12T18:14:38.751260109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:38.752404 containerd[1629]: time="2025-12-12T18:14:38.752368516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:14:38.752495 containerd[1629]: time="2025-12-12T18:14:38.752455797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:38.752642 kubelet[2802]: E1212 18:14:38.752602 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:14:38.752710 kubelet[2802]: E1212 18:14:38.752663 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:14:38.752807 kubelet[2802]: E1212 18:14:38.752769 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdd5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d7c5d55f9-7mw7q_calico-system(f5d7327f-3d2b-4ade-8746-8210d015da61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:38.754238 kubelet[2802]: E1212 18:14:38.754203 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:14:42.481494 containerd[1629]: time="2025-12-12T18:14:42.481450643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:14:42.635860 containerd[1629]: time="2025-12-12T18:14:42.635653317Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:42.637101 containerd[1629]: time="2025-12-12T18:14:42.637042204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:14:42.637179 containerd[1629]: time="2025-12-12T18:14:42.637165294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:42.637413 kubelet[2802]: E1212 18:14:42.637376 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:42.637763 kubelet[2802]: E1212 18:14:42.637426 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:42.638057 kubelet[2802]: E1212 18:14:42.638022 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh5cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cbcf5f67f-bmmdp_calico-apiserver(415d4ab0-257f-4751-838c-4b86e1cd5e79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:42.638613 containerd[1629]: time="2025-12-12T18:14:42.638593591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:14:42.639158 kubelet[2802]: E1212 18:14:42.639131 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:14:42.772075 containerd[1629]: time="2025-12-12T18:14:42.771923145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
18:14:42.773258 containerd[1629]: time="2025-12-12T18:14:42.773210981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:14:42.773323 containerd[1629]: time="2025-12-12T18:14:42.773285412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:42.774539 kubelet[2802]: E1212 18:14:42.774469 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:42.774539 kubelet[2802]: E1212 18:14:42.774512 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:14:42.775472 kubelet[2802]: E1212 18:14:42.775422 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxlxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cbcf5f67f-dvvv5_calico-apiserver(b35a9cda-d256-490b-8223-d4936abd6ff5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:42.776688 kubelet[2802]: E1212 18:14:42.776643 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:14:44.483402 containerd[1629]: time="2025-12-12T18:14:44.482255087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:14:44.616634 containerd[1629]: time="2025-12-12T18:14:44.616533688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:44.619676 containerd[1629]: time="2025-12-12T18:14:44.618365896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:44.619676 containerd[1629]: time="2025-12-12T18:14:44.618399266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:14:44.620558 kubelet[2802]: E1212 18:14:44.620486 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:14:44.622345 kubelet[2802]: E1212 18:14:44.620575 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:14:44.622345 kubelet[2802]: E1212 18:14:44.620889 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csz9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-86pl8_calico-system(c713cd34-08f7-480c-b91d-bedb3b68bb36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:44.623102 kubelet[2802]: E1212 18:14:44.622276 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:14:45.480536 kubelet[2802]: E1212 18:14:45.480499 2802 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:47.483899 containerd[1629]: time="2025-12-12T18:14:47.483667404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:14:47.631931 containerd[1629]: time="2025-12-12T18:14:47.631731804Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:47.633045 containerd[1629]: time="2025-12-12T18:14:47.632759128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:14:47.633223 containerd[1629]: time="2025-12-12T18:14:47.633033289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:47.633492 kubelet[2802]: E1212 18:14:47.633450 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:14:47.634978 kubelet[2802]: E1212 18:14:47.633505 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:14:47.634978 kubelet[2802]: E1212 18:14:47.633742 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6hbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8ggwr_calico-system(f46da395-0309-47b8-bfd7-ce69c3c79781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 12 18:14:47.635151 containerd[1629]: time="2025-12-12T18:14:47.634252283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:14:47.772704 containerd[1629]: time="2025-12-12T18:14:47.772452779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:47.773560 containerd[1629]: time="2025-12-12T18:14:47.773450023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:14:47.773560 containerd[1629]: time="2025-12-12T18:14:47.773535363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:47.773797 kubelet[2802]: E1212 18:14:47.773733 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:14:47.773862 kubelet[2802]: E1212 18:14:47.773797 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:14:47.774431 containerd[1629]: time="2025-12-12T18:14:47.774165935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:14:47.774521 kubelet[2802]: E1212 18:14:47.774443 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9kts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6ccfb466b6-9s5wz_calico-system(38a7f512-410f-47be-bd45-a402f5067f03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:47.775987 kubelet[2802]: E1212 18:14:47.775673 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:14:47.897958 containerd[1629]: time="2025-12-12T18:14:47.897788131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:14:47.898762 containerd[1629]: time="2025-12-12T18:14:47.898679704Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:14:47.898762 containerd[1629]: time="2025-12-12T18:14:47.898736414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:14:47.899151 kubelet[2802]: E1212 18:14:47.898920 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:14:47.899200 kubelet[2802]: E1212 18:14:47.899157 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:14:47.899324 kubelet[2802]: E1212 18:14:47.899244 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6hbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8ggwr_calico-system(f46da395-0309-47b8-bfd7-ce69c3c79781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:14:47.903457 kubelet[2802]: E1212 18:14:47.903414 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:14:48.479529 kubelet[2802]: E1212 18:14:48.479408 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:14:53.480986 kubelet[2802]: E1212 18:14:53.480910 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:14:53.481664 kubelet[2802]: E1212 18:14:53.481120 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:14:54.479666 kubelet[2802]: E1212 18:14:54.479417 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:14:56.481652 kubelet[2802]: E1212 18:14:56.481570 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:15:00.481728 kubelet[2802]: E1212 18:15:00.481563 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:15:00.482529 kubelet[2802]: E1212 18:15:00.482389 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:15:01.484096 kubelet[2802]: E1212 18:15:01.483867 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:15:04.479009 kubelet[2802]: E1212 18:15:04.478977 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:15:06.481205 kubelet[2802]: E1212 18:15:06.481137 2802 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:15:07.483748 kubelet[2802]: E1212 18:15:07.483713 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:15:07.486735 kubelet[2802]: E1212 18:15:07.484565 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:15:10.482895 kubelet[2802]: E1212 18:15:10.482751 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:15:11.480076 kubelet[2802]: E1212 18:15:11.480032 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:15:11.483406 kubelet[2802]: E1212 18:15:11.481929 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:15:15.485172 kubelet[2802]: E1212 18:15:15.485059 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:15:17.482236 kubelet[2802]: E1212 18:15:17.482140 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:15:18.480331 kubelet[2802]: E1212 18:15:18.480108 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" 
podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:15:18.480621 kubelet[2802]: E1212 18:15:18.480573 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:15:21.483003 kubelet[2802]: E1212 18:15:21.482277 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:15:23.479320 kubelet[2802]: E1212 18:15:23.479096 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:15:26.480067 kubelet[2802]: E1212 18:15:26.479989 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:15:29.483238 containerd[1629]: time="2025-12-12T18:15:29.483191548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:15:29.613987 containerd[1629]: time="2025-12-12T18:15:29.613812494Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:29.614945 containerd[1629]: time="2025-12-12T18:15:29.614824659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:15:29.615051 containerd[1629]: time="2025-12-12T18:15:29.615031695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:29.615455 kubelet[2802]: E1212 18:15:29.615400 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:15:29.616557 
kubelet[2802]: E1212 18:15:29.615538 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:15:29.616557 kubelet[2802]: E1212 18:15:29.616023 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e3f4f98888c449589a809d6f0e403cbb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rdd5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6d7c5d55f9-7mw7q_calico-system(f5d7327f-3d2b-4ade-8746-8210d015da61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:29.618580 containerd[1629]: time="2025-12-12T18:15:29.618536011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:15:29.749648 containerd[1629]: time="2025-12-12T18:15:29.749334345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:29.750619 containerd[1629]: time="2025-12-12T18:15:29.750417938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:15:29.750619 containerd[1629]: time="2025-12-12T18:15:29.750576935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:29.751133 kubelet[2802]: E1212 18:15:29.751062 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:15:29.751201 kubelet[2802]: E1212 18:15:29.751141 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:15:29.751469 kubelet[2802]: E1212 18:15:29.751286 2802 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdd5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d7c5d55f9-7mw7q_calico-system(f5d7327f-3d2b-4ade-8746-8210d015da61): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:29.753382 kubelet[2802]: E1212 18:15:29.753326 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:15:31.480814 containerd[1629]: time="2025-12-12T18:15:31.480761083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:15:31.625878 containerd[1629]: time="2025-12-12T18:15:31.625815411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:31.627323 containerd[1629]: time="2025-12-12T18:15:31.627178931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:15:31.627323 containerd[1629]: time="2025-12-12T18:15:31.627236600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:31.627633 kubelet[2802]: E1212 18:15:31.627591 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:15:31.627968 kubelet[2802]: E1212 18:15:31.627651 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:15:31.628428 kubelet[2802]: E1212 18:15:31.628363 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxlxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cbcf5f67f-dvvv5_calico-apiserver(b35a9cda-d256-490b-8223-d4936abd6ff5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:31.629613 kubelet[2802]: E1212 18:15:31.629565 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:15:32.479119 kubelet[2802]: E1212 18:15:32.479026 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:15:33.483852 containerd[1629]: 
time="2025-12-12T18:15:33.483796847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:15:33.620942 containerd[1629]: time="2025-12-12T18:15:33.620881664Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:33.622000 containerd[1629]: time="2025-12-12T18:15:33.621967069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:15:33.622061 containerd[1629]: time="2025-12-12T18:15:33.622041508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:33.622328 kubelet[2802]: E1212 18:15:33.622240 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:15:33.622646 kubelet[2802]: E1212 18:15:33.622340 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:15:33.622646 kubelet[2802]: E1212 18:15:33.622483 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh5cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6cbcf5f67f-bmmdp_calico-apiserver(415d4ab0-257f-4751-838c-4b86e1cd5e79): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:33.624636 kubelet[2802]: E1212 18:15:33.624607 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:15:34.481707 containerd[1629]: time="2025-12-12T18:15:34.481624243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:15:34.612482 containerd[1629]: time="2025-12-12T18:15:34.612422144Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:34.613675 containerd[1629]: time="2025-12-12T18:15:34.613582938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:15:34.613892 containerd[1629]: time="2025-12-12T18:15:34.613869864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:34.614346 kubelet[2802]: E1212 18:15:34.614281 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:15:34.614517 kubelet[2802]: E1212 18:15:34.614356 2802 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:15:34.615565 containerd[1629]: time="2025-12-12T18:15:34.615289395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:15:34.615624 kubelet[2802]: E1212 18:15:34.615368 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9kts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6ccfb466b6-9s5wz_calico-system(38a7f512-410f-47be-bd45-a402f5067f03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:34.617389 kubelet[2802]: E1212 18:15:34.617187 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:15:34.755935 
containerd[1629]: time="2025-12-12T18:15:34.755560787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:34.756997 containerd[1629]: time="2025-12-12T18:15:34.756934218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:15:34.757180 containerd[1629]: time="2025-12-12T18:15:34.757031007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:34.757456 kubelet[2802]: E1212 18:15:34.757398 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:15:34.758860 kubelet[2802]: E1212 18:15:34.757646 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:15:34.758860 kubelet[2802]: E1212 18:15:34.758555 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csz9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-86pl8_calico-system(c713cd34-08f7-480c-b91d-bedb3b68bb36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:34.760325 kubelet[2802]: E1212 18:15:34.760273 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:15:35.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.234.28.21:22-139.178.89.65:60060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:15:35.130348 systemd[1]: Started sshd@7-172.234.28.21:22-139.178.89.65:60060.service - OpenSSH per-connection server daemon (139.178.89.65:60060). Dec 12 18:15:35.133494 kernel: kauditd_printk_skb: 194 callbacks suppressed Dec 12 18:15:35.133547 kernel: audit: type=1130 audit(1765563335.130:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.234.28.21:22-139.178.89.65:60060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:35.484000 audit[5208]: USER_ACCT pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.492383 kernel: audit: type=1101 audit(1765563335.484:753): pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.486569 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:15:35.492847 sshd[5208]: Accepted publickey for core from 139.178.89.65 port 60060 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:15:35.484000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.500461 systemd-logind[1603]: New session 8 of user core. 
Dec 12 18:15:35.507332 kernel: audit: type=1103 audit(1765563335.484:754): pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.510596 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:15:35.514318 kernel: audit: type=1006 audit(1765563335.484:755): pid=5208 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 12 18:15:35.484000 audit[5208]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc44742d00 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:35.525311 kernel: audit: type=1300 audit(1765563335.484:755): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc44742d00 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:35.484000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:35.529316 kernel: audit: type=1327 audit(1765563335.484:755): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:35.516000 audit[5208]: USER_START pid=5208 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.539334 kernel: audit: type=1105 audit(1765563335.516:756): pid=5208 uid=0 auid=500 ses=8 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.517000 audit[5211]: CRED_ACQ pid=5211 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.549560 kernel: audit: type=1103 audit(1765563335.517:757): pid=5211 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.738332 sshd[5211]: Connection closed by 139.178.89.65 port 60060 Dec 12 18:15:35.737231 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:35.739000 audit[5208]: USER_END pid=5208 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.749329 kernel: audit: type=1106 audit(1765563335.739:758): pid=5208 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.745000 audit[5208]: CRED_DISP pid=5208 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.750728 systemd-logind[1603]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:15:35.753016 systemd[1]: sshd@7-172.234.28.21:22-139.178.89.65:60060.service: Deactivated successfully. Dec 12 18:15:35.756425 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:15:35.760425 kernel: audit: type=1104 audit(1765563335.745:759): pid=5208 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:35.760428 systemd-logind[1603]: Removed session 8. Dec 12 18:15:35.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.234.28.21:22-139.178.89.65:60060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:15:38.480417 containerd[1629]: time="2025-12-12T18:15:38.480360931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:15:38.616971 containerd[1629]: time="2025-12-12T18:15:38.616358930Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:38.618038 containerd[1629]: time="2025-12-12T18:15:38.617997619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:15:38.618523 containerd[1629]: time="2025-12-12T18:15:38.618326685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:38.618723 kubelet[2802]: E1212 18:15:38.618684 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:15:38.620960 kubelet[2802]: E1212 18:15:38.618776 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:15:38.621126 kubelet[2802]: E1212 18:15:38.620436 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6hbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8ggwr_calico-system(f46da395-0309-47b8-bfd7-ce69c3c79781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 12 18:15:38.624452 containerd[1629]: time="2025-12-12T18:15:38.623508902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:15:38.756417 containerd[1629]: time="2025-12-12T18:15:38.756214481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:15:38.757859 containerd[1629]: time="2025-12-12T18:15:38.757743542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:15:38.757859 containerd[1629]: time="2025-12-12T18:15:38.757827211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 18:15:38.758293 kubelet[2802]: E1212 18:15:38.758200 2802 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:15:38.758293 kubelet[2802]: E1212 18:15:38.758267 2802 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:15:38.759544 kubelet[2802]: E1212 18:15:38.759380 2802 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6hbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8ggwr_calico-system(f46da395-0309-47b8-bfd7-ce69c3c79781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:15:38.760766 kubelet[2802]: E1212 18:15:38.760722 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:15:40.802788 systemd[1]: Started sshd@8-172.234.28.21:22-139.178.89.65:38902.service - OpenSSH per-connection server daemon (139.178.89.65:38902). Dec 12 18:15:40.811128 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:15:40.811379 kernel: audit: type=1130 audit(1765563340.802:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.234.28.21:22-139.178.89.65:38902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:40.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.234.28.21:22-139.178.89.65:38902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:15:41.109000 audit[5249]: USER_ACCT pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.110592 sshd[5249]: Accepted publickey for core from 139.178.89.65 port 38902 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:15:41.114155 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:15:41.118349 kernel: audit: type=1101 audit(1765563341.109:762): pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.112000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.131357 systemd-logind[1603]: New session 9 of user core. Dec 12 18:15:41.132336 kernel: audit: type=1103 audit(1765563341.112:763): pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.136368 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 12 18:15:41.141244 kernel: audit: type=1006 audit(1765563341.112:764): pid=5249 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 12 18:15:41.112000 audit[5249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc1272f60 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:41.174392 kernel: audit: type=1300 audit(1765563341.112:764): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc1272f60 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:41.112000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:41.187337 kernel: audit: type=1327 audit(1765563341.112:764): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:41.143000 audit[5249]: USER_START pid=5249 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.197333 kernel: audit: type=1105 audit(1765563341.143:765): pid=5249 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.174000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.206338 kernel: audit: type=1103 audit(1765563341.174:766): pid=5252 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.354919 sshd[5252]: Connection closed by 139.178.89.65 port 38902 Dec 12 18:15:41.356609 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:41.358000 audit[5249]: USER_END pid=5249 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.362894 systemd-logind[1603]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:15:41.367069 systemd[1]: sshd@8-172.234.28.21:22-139.178.89.65:38902.service: Deactivated successfully. Dec 12 18:15:41.368340 kernel: audit: type=1106 audit(1765563341.358:767): pid=5249 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.371031 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:15:41.374462 systemd-logind[1603]: Removed session 9. 
Dec 12 18:15:41.359000 audit[5249]: CRED_DISP pid=5249 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:41.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.234.28.21:22-139.178.89.65:38902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:41.382346 kernel: audit: type=1104 audit(1765563341.359:768): pid=5249 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:42.481967 kubelet[2802]: E1212 18:15:42.481851 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:15:43.480620 kubelet[2802]: E1212 18:15:43.480314 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:15:44.482924 kubelet[2802]: E1212 18:15:44.480608 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:15:44.482924 kubelet[2802]: E1212 18:15:44.480557 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:15:45.482493 kubelet[2802]: E1212 18:15:45.481867 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:15:45.484520 kubelet[2802]: E1212 18:15:45.483388 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:15:46.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.234.28.21:22-139.178.89.65:38910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:46.414766 systemd[1]: Started sshd@9-172.234.28.21:22-139.178.89.65:38910.service - OpenSSH per-connection server daemon (139.178.89.65:38910). Dec 12 18:15:46.415743 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 18:15:46.415781 kernel: audit: type=1130 audit(1765563346.414:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.234.28.21:22-139.178.89.65:38910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:15:46.709000 audit[5265]: USER_ACCT pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.710584 sshd[5265]: Accepted publickey for core from 139.178.89.65 port 38910 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:15:46.718352 kernel: audit: type=1101 audit(1765563346.709:771): pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.720055 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:15:46.719000 audit[5265]: CRED_ACQ pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.727362 kernel: audit: type=1103 audit(1765563346.719:772): pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.727423 kernel: audit: type=1006 audit(1765563346.719:773): pid=5265 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 18:15:46.719000 audit[5265]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7a1b8040 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:46.733275 kernel: audit: type=1300 audit(1765563346.719:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7a1b8040 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:46.740318 kernel: audit: type=1327 audit(1765563346.719:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:46.719000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:46.744795 systemd-logind[1603]: New session 10 of user core. Dec 12 18:15:46.753499 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 18:15:46.759000 audit[5265]: USER_START pid=5265 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.768321 kernel: audit: type=1105 audit(1765563346.759:774): pid=5265 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.770000 audit[5268]: CRED_ACQ pid=5268 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.777322 kernel: audit: type=1103 audit(1765563346.770:775): pid=5268 uid=0 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.959187 sshd[5268]: Connection closed by 139.178.89.65 port 38910 Dec 12 18:15:46.958501 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:46.959000 audit[5265]: USER_END pid=5265 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.969329 kernel: audit: type=1106 audit(1765563346.959:776): pid=5265 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.971751 systemd[1]: sshd@9-172.234.28.21:22-139.178.89.65:38910.service: Deactivated successfully. Dec 12 18:15:46.974099 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:15:46.975459 systemd-logind[1603]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:15:46.959000 audit[5265]: CRED_DISP pid=5265 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.234.28.21:22-139.178.89.65:38910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:15:46.986314 kernel: audit: type=1104 audit(1765563346.959:777): pid=5265 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:46.986443 systemd-logind[1603]: Removed session 10. Dec 12 18:15:47.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.234.28.21:22-139.178.89.65:38926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:47.019216 systemd[1]: Started sshd@10-172.234.28.21:22-139.178.89.65:38926.service - OpenSSH per-connection server daemon (139.178.89.65:38926). Dec 12 18:15:47.317000 audit[5282]: USER_ACCT pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:47.318922 sshd[5282]: Accepted publickey for core from 139.178.89.65 port 38926 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:15:47.319000 audit[5282]: CRED_ACQ pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:47.319000 audit[5282]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd24934780 a2=3 a3=0 items=0 ppid=1 pid=5282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:47.319000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:47.320022 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:15:47.325559 systemd-logind[1603]: New session 11 of user core. Dec 12 18:15:47.332465 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:15:47.334000 audit[5282]: USER_START pid=5282 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:47.336000 audit[5285]: CRED_ACQ pid=5285 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:47.631951 sshd[5285]: Connection closed by 139.178.89.65 port 38926 Dec 12 18:15:47.632489 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:47.635000 audit[5282]: USER_END pid=5282 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:47.635000 audit[5282]: CRED_DISP pid=5282 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:47.638770 systemd-logind[1603]: Session 11 logged out. Waiting for processes to exit. 
Dec 12 18:15:47.640089 systemd[1]: sshd@10-172.234.28.21:22-139.178.89.65:38926.service: Deactivated successfully. Dec 12 18:15:47.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.234.28.21:22-139.178.89.65:38926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:47.643284 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:15:47.649816 systemd-logind[1603]: Removed session 11. Dec 12 18:15:47.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.234.28.21:22-139.178.89.65:38932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:47.693779 systemd[1]: Started sshd@11-172.234.28.21:22-139.178.89.65:38932.service - OpenSSH per-connection server daemon (139.178.89.65:38932). Dec 12 18:15:48.007000 audit[5298]: USER_ACCT pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:48.007986 sshd[5298]: Accepted publickey for core from 139.178.89.65 port 38932 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:15:48.009000 audit[5298]: CRED_ACQ pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:48.009000 audit[5298]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4c27a3a0 a2=3 a3=0 items=0 ppid=1 pid=5298 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:48.009000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:48.010049 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:15:48.018994 systemd-logind[1603]: New session 12 of user core. Dec 12 18:15:48.024568 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:15:48.032000 audit[5298]: USER_START pid=5298 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:48.034000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:48.270777 sshd[5316]: Connection closed by 139.178.89.65 port 38932 Dec 12 18:15:48.271520 sshd-session[5298]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:48.272000 audit[5298]: USER_END pid=5298 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:48.272000 audit[5298]: CRED_DISP pid=5298 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:48.277106 systemd-logind[1603]: Session 12 logged out. 
Waiting for processes to exit. Dec 12 18:15:48.277641 systemd[1]: sshd@11-172.234.28.21:22-139.178.89.65:38932.service: Deactivated successfully. Dec 12 18:15:48.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.234.28.21:22-139.178.89.65:38932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:48.280500 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:15:48.284201 systemd-logind[1603]: Removed session 12. Dec 12 18:15:49.479314 kubelet[2802]: E1212 18:15:49.479253 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:15:50.479818 kubelet[2802]: E1212 18:15:50.479578 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Dec 12 18:15:50.481384 kubelet[2802]: E1212 18:15:50.481230 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" 
podUID="f46da395-0309-47b8-bfd7-ce69c3c79781" Dec 12 18:15:53.341639 systemd[1]: Started sshd@12-172.234.28.21:22-139.178.89.65:37472.service - OpenSSH per-connection server daemon (139.178.89.65:37472). Dec 12 18:15:53.342692 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 18:15:53.342727 kernel: audit: type=1130 audit(1765563353.340:797): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.234.28.21:22-139.178.89.65:37472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:53.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.234.28.21:22-139.178.89.65:37472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:53.653000 audit[5334]: USER_ACCT pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.656709 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:15:53.663571 kernel: audit: type=1101 audit(1765563353.653:798): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.663615 sshd[5334]: Accepted publickey for core from 139.178.89.65 port 37472 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:15:53.654000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.664982 systemd-logind[1603]: New session 13 of user core. Dec 12 18:15:53.672960 kernel: audit: type=1103 audit(1765563353.654:799): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.673004 kernel: audit: type=1006 audit(1765563353.654:800): pid=5334 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 18:15:53.673509 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:15:53.654000 audit[5334]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7a344570 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.679497 kernel: audit: type=1300 audit(1765563353.654:800): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7a344570 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:53.654000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:53.688723 kernel: audit: type=1327 audit(1765563353.654:800): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:53.688784 kernel: audit: type=1105 audit(1765563353.677:801): pid=5334 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.677000 audit[5334]: USER_START pid=5334 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.680000 audit[5337]: CRED_ACQ pid=5337 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.703484 kernel: audit: type=1103 audit(1765563353.680:802): pid=5337 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.878529 sshd[5337]: Connection closed by 139.178.89.65 port 37472 Dec 12 18:15:53.879150 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:53.890388 kernel: audit: type=1106 audit(1765563353.879:803): pid=5334 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.879000 audit[5334]: USER_END pid=5334 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.885912 
systemd[1]: sshd@12-172.234.28.21:22-139.178.89.65:37472.service: Deactivated successfully. Dec 12 18:15:53.890012 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:15:53.879000 audit[5334]: CRED_DISP pid=5334 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.893273 systemd-logind[1603]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:15:53.894945 systemd-logind[1603]: Removed session 13. Dec 12 18:15:53.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.234.28.21:22-139.178.89.65:37472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:53.899320 kernel: audit: type=1104 audit(1765563353.879:804): pid=5334 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:53.954639 systemd[1]: Started sshd@13-172.234.28.21:22-139.178.89.65:37482.service - OpenSSH per-connection server daemon (139.178.89.65:37482). Dec 12 18:15:53.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.234.28.21:22-139.178.89.65:37482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:15:54.263000 audit[5349]: USER_ACCT pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:54.265643 sshd[5349]: Accepted publickey for core from 139.178.89.65 port 37482 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:15:54.267000 audit[5349]: CRED_ACQ pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:54.267000 audit[5349]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff666f60 a2=3 a3=0 items=0 ppid=1 pid=5349 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:54.267000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:54.268836 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:15:54.275396 systemd-logind[1603]: New session 14 of user core. Dec 12 18:15:54.282597 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 18:15:54.287000 audit[5349]: USER_START pid=5349 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:54.289000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:54.612483 sshd[5352]: Connection closed by 139.178.89.65 port 37482 Dec 12 18:15:54.614541 sshd-session[5349]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:54.615000 audit[5349]: USER_END pid=5349 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:54.615000 audit[5349]: CRED_DISP pid=5349 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:54.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.234.28.21:22-139.178.89.65:37482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:54.619343 systemd-logind[1603]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:15:54.619955 systemd[1]: sshd@13-172.234.28.21:22-139.178.89.65:37482.service: Deactivated successfully. 
Dec 12 18:15:54.623895 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:15:54.628097 systemd-logind[1603]: Removed session 14. Dec 12 18:15:54.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.234.28.21:22-139.178.89.65:37496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:54.687718 systemd[1]: Started sshd@14-172.234.28.21:22-139.178.89.65:37496.service - OpenSSH per-connection server daemon (139.178.89.65:37496). Dec 12 18:15:55.018000 audit[5361]: USER_ACCT pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:55.020368 sshd[5361]: Accepted publickey for core from 139.178.89.65 port 37496 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:15:55.022000 audit[5361]: CRED_ACQ pid=5361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:55.022000 audit[5361]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6f1a4610 a2=3 a3=0 items=0 ppid=1 pid=5361 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.022000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:55.024087 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:15:55.031632 systemd-logind[1603]: New session 15 of user core. 
Dec 12 18:15:55.037449 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 18:15:55.041000 audit[5361]: USER_START pid=5361 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:55.044000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:55.481698 kubelet[2802]: E1212 18:15:55.481587 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61" Dec 12 18:15:55.482980 kubelet[2802]: E1212 18:15:55.482830 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5" Dec 12 18:15:55.708000 audit[5374]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=5374 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:55.708000 audit[5374]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd8e3478a0 a2=0 a3=7ffd8e34788c items=0 ppid=2909 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:55.713000 audit[5374]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5374 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:55.713000 audit[5374]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd8e3478a0 a2=0 a3=0 items=0 ppid=2909 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:55.730000 audit[5376]: NETFILTER_CFG table=filter:137 family=2 entries=38 op=nft_register_rule pid=5376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:55.730000 audit[5376]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc1daa0cb0 a2=0 a3=7ffc1daa0c9c items=0 ppid=2909 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:55.735000 audit[5376]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:15:55.735000 audit[5376]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc1daa0cb0 a2=0 a3=0 items=0 ppid=2909 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:55.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:15:55.747614 sshd[5364]: Connection closed by 139.178.89.65 port 37496 Dec 12 18:15:55.749538 sshd-session[5361]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:55.749000 audit[5361]: USER_END pid=5361 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:55.749000 audit[5361]: CRED_DISP pid=5361 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:55.753121 systemd[1]: sshd@14-172.234.28.21:22-139.178.89.65:37496.service: Deactivated successfully. 
Dec 12 18:15:55.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.234.28.21:22-139.178.89.65:37496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:55.755423 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:15:55.756875 systemd-logind[1603]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:15:55.758664 systemd-logind[1603]: Removed session 15. Dec 12 18:15:55.814388 systemd[1]: Started sshd@15-172.234.28.21:22-139.178.89.65:37500.service - OpenSSH per-connection server daemon (139.178.89.65:37500). Dec 12 18:15:55.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.234.28.21:22-139.178.89.65:37500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:56.120000 audit[5381]: USER_ACCT pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:56.122399 sshd[5381]: Accepted publickey for core from 139.178.89.65 port 37500 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:15:56.122000 audit[5381]: CRED_ACQ pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:56.122000 audit[5381]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3c6d4410 a2=3 a3=0 items=0 ppid=1 pid=5381 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.122000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:56.124215 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:15:56.130871 systemd-logind[1603]: New session 16 of user core. Dec 12 18:15:56.137586 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:15:56.141000 audit[5381]: USER_START pid=5381 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:56.144000 audit[5384]: CRED_ACQ pid=5384 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:56.475518 sshd[5384]: Connection closed by 139.178.89.65 port 37500 Dec 12 18:15:56.476659 sshd-session[5381]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:56.477000 audit[5381]: USER_END pid=5381 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:56.477000 audit[5381]: CRED_DISP pid=5381 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:56.483936 systemd-logind[1603]: Session 16 logged out. Waiting for processes to exit. 
Dec 12 18:15:56.485709 systemd[1]: sshd@15-172.234.28.21:22-139.178.89.65:37500.service: Deactivated successfully. Dec 12 18:15:56.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.234.28.21:22-139.178.89.65:37500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:56.489601 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:15:56.493518 systemd-logind[1603]: Removed session 16. Dec 12 18:15:56.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.234.28.21:22-139.178.89.65:37504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:56.540027 systemd[1]: Started sshd@16-172.234.28.21:22-139.178.89.65:37504.service - OpenSSH per-connection server daemon (139.178.89.65:37504). Dec 12 18:15:56.841000 audit[5394]: USER_ACCT pid=5394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:56.843165 sshd[5394]: Accepted publickey for core from 139.178.89.65 port 37504 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:15:56.842000 audit[5394]: CRED_ACQ pid=5394 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:56.842000 audit[5394]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda5436440 a2=3 a3=0 items=0 ppid=1 pid=5394 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:15:56.842000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 18:15:56.844909 sshd-session[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:15:56.854751 systemd-logind[1603]: New session 17 of user core. Dec 12 18:15:56.860688 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 18:15:56.864000 audit[5394]: USER_START pid=5394 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:56.867000 audit[5397]: CRED_ACQ pid=5397 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:57.093772 sshd[5397]: Connection closed by 139.178.89.65 port 37504 Dec 12 18:15:57.093519 sshd-session[5394]: pam_unix(sshd:session): session closed for user core Dec 12 18:15:57.096000 audit[5394]: USER_END pid=5394 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:57.097000 audit[5394]: CRED_DISP pid=5394 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:15:57.101347 systemd-logind[1603]: Session 17 logged out. 
Waiting for processes to exit. Dec 12 18:15:57.102144 systemd[1]: sshd@16-172.234.28.21:22-139.178.89.65:37504.service: Deactivated successfully. Dec 12 18:15:57.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.234.28.21:22-139.178.89.65:37504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:15:57.107729 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:15:57.112370 systemd-logind[1603]: Removed session 17. Dec 12 18:15:57.482050 kubelet[2802]: E1212 18:15:57.481618 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36" Dec 12 18:15:58.480961 kubelet[2802]: E1212 18:15:58.480659 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03" Dec 12 18:15:59.482855 kubelet[2802]: E1212 18:15:59.482414 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79" Dec 12 18:16:01.033000 audit[5409]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:16:01.037139 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 18:16:01.037208 kernel: audit: type=1325 audit(1765563361.033:846): table=filter:139 family=2 entries=26 op=nft_register_rule pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:16:01.033000 audit[5409]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff9b84a3a0 a2=0 a3=7fff9b84a38c items=0 ppid=2909 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:16:01.052322 kernel: audit: type=1300 audit(1765563361.033:846): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff9b84a3a0 a2=0 a3=7fff9b84a38c items=0 ppid=2909 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:16:01.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:16:01.061067 kernel: audit: type=1327 audit(1765563361.033:846): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:16:01.061137 kernel: audit: type=1325 audit(1765563361.042:847): table=nat:140 
family=2 entries=104 op=nft_register_chain pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:16:01.042000 audit[5409]: NETFILTER_CFG table=nat:140 family=2 entries=104 op=nft_register_chain pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 18:16:01.042000 audit[5409]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff9b84a3a0 a2=0 a3=7fff9b84a38c items=0 ppid=2909 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:16:01.077566 kernel: audit: type=1300 audit(1765563361.042:847): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff9b84a3a0 a2=0 a3=7fff9b84a38c items=0 ppid=2909 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 18:16:01.077613 kernel: audit: type=1327 audit(1765563361.042:847): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:16:01.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 18:16:02.163880 systemd[1]: Started sshd@17-172.234.28.21:22-139.178.89.65:51922.service - OpenSSH per-connection server daemon (139.178.89.65:51922). Dec 12 18:16:02.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.234.28.21:22-139.178.89.65:51922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 18:16:02.171532 kernel: audit: type=1130 audit(1765563362.162:848): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.234.28.21:22-139.178.89.65:51922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 18:16:02.475000 audit[5413]: USER_ACCT pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:16:02.479892 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:16:02.485199 sshd[5413]: Accepted publickey for core from 139.178.89.65 port 51922 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA Dec 12 18:16:02.485388 kernel: audit: type=1101 audit(1765563362.475:849): pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:16:02.478000 audit[5413]: CRED_ACQ pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:16:02.498350 kernel: audit: type=1103 audit(1765563362.478:850): pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 18:16:02.501473 systemd-logind[1603]: New session 18 of user core. 
Dec 12 18:16:02.506238 kernel: audit: type=1006 audit(1765563362.478:851): pid=5413 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1
Dec 12 18:16:02.478000 audit[5413]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddeac0850 a2=3 a3=0 items=0 ppid=1 pid=5413 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:16:02.478000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:16:02.508455 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 18:16:02.512000 audit[5413]: USER_START pid=5413 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:02.514000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:02.717176 sshd[5416]: Connection closed by 139.178.89.65 port 51922
Dec 12 18:16:02.718555 sshd-session[5413]: pam_unix(sshd:session): session closed for user core
Dec 12 18:16:02.719000 audit[5413]: USER_END pid=5413 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:02.720000 audit[5413]: CRED_DISP pid=5413 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:02.724640 systemd[1]: sshd@17-172.234.28.21:22-139.178.89.65:51922.service: Deactivated successfully.
Dec 12 18:16:02.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.234.28.21:22-139.178.89.65:51922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:16:02.727250 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 18:16:02.728946 systemd-logind[1603]: Session 18 logged out. Waiting for processes to exit.
Dec 12 18:16:02.731263 systemd-logind[1603]: Removed session 18.
Dec 12 18:16:03.483519 kubelet[2802]: E1212 18:16:03.483478 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781"
Dec 12 18:16:07.784778 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 12 18:16:07.784875 kernel: audit: type=1130 audit(1765563367.780:857): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.234.28.21:22-139.178.89.65:51938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:16:07.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.234.28.21:22-139.178.89.65:51938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:16:07.781677 systemd[1]: Started sshd@18-172.234.28.21:22-139.178.89.65:51938.service - OpenSSH per-connection server daemon (139.178.89.65:51938).
Dec 12 18:16:08.077000 audit[5455]: USER_ACCT pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.080967 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:16:08.087156 sshd[5455]: Accepted publickey for core from 139.178.89.65 port 51938 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA
Dec 12 18:16:08.087390 kernel: audit: type=1101 audit(1765563368.077:858): pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.077000 audit[5455]: CRED_ACQ pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.095630 kernel: audit: type=1103 audit(1765563368.077:859): pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.095662 kernel: audit: type=1006 audit(1765563368.077:860): pid=5455 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Dec 12 18:16:08.077000 audit[5455]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed95b9b90 a2=3 a3=0 items=0 ppid=1 pid=5455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:16:08.100668 kernel: audit: type=1300 audit(1765563368.077:860): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed95b9b90 a2=3 a3=0 items=0 ppid=1 pid=5455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:16:08.103410 systemd-logind[1603]: New session 19 of user core.
Dec 12 18:16:08.077000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:16:08.110339 kernel: audit: type=1327 audit(1765563368.077:860): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:16:08.111732 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 18:16:08.115000 audit[5455]: USER_START pid=5455 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.124370 kernel: audit: type=1105 audit(1765563368.115:861): pid=5455 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.123000 audit[5458]: CRED_ACQ pid=5458 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.131320 kernel: audit: type=1103 audit(1765563368.123:862): pid=5458 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.287025 sshd[5458]: Connection closed by 139.178.89.65 port 51938
Dec 12 18:16:08.288519 sshd-session[5455]: pam_unix(sshd:session): session closed for user core
Dec 12 18:16:08.288000 audit[5455]: USER_END pid=5455 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.317061 systemd[1]: sshd@18-172.234.28.21:22-139.178.89.65:51938.service: Deactivated successfully.
Dec 12 18:16:08.321401 kernel: audit: type=1106 audit(1765563368.288:863): pid=5455 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.290000 audit[5455]: CRED_DISP pid=5455 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.326208 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 18:16:08.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.234.28.21:22-139.178.89.65:51938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:16:08.328345 kernel: audit: type=1104 audit(1765563368.290:864): pid=5455 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:08.329516 systemd-logind[1603]: Session 19 logged out. Waiting for processes to exit.
Dec 12 18:16:08.331539 systemd-logind[1603]: Removed session 19.
Dec 12 18:16:08.478761 kubelet[2802]: E1212 18:16:08.478658 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:16:09.484023 kubelet[2802]: E1212 18:16:09.483864 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-86pl8" podUID="c713cd34-08f7-480c-b91d-bedb3b68bb36"
Dec 12 18:16:09.484023 kubelet[2802]: E1212 18:16:09.483938 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-dvvv5" podUID="b35a9cda-d256-490b-8223-d4936abd6ff5"
Dec 12 18:16:09.486425 kubelet[2802]: E1212 18:16:09.485666 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d7c5d55f9-7mw7q" podUID="f5d7327f-3d2b-4ade-8746-8210d015da61"
Dec 12 18:16:11.482992 kubelet[2802]: E1212 18:16:11.482892 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6cbcf5f67f-bmmdp" podUID="415d4ab0-257f-4751-838c-4b86e1cd5e79"
Dec 12 18:16:12.479364 kubelet[2802]: E1212 18:16:12.479009 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Dec 12 18:16:13.364345 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 18:16:13.364440 kernel: audit: type=1130 audit(1765563373.357:866): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.234.28.21:22-139.178.89.65:53676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:16:13.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.234.28.21:22-139.178.89.65:53676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:16:13.358527 systemd[1]: Started sshd@19-172.234.28.21:22-139.178.89.65:53676.service - OpenSSH per-connection server daemon (139.178.89.65:53676).
Dec 12 18:16:13.480322 kubelet[2802]: E1212 18:16:13.480156 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6ccfb466b6-9s5wz" podUID="38a7f512-410f-47be-bd45-a402f5067f03"
Dec 12 18:16:13.666008 sshd[5470]: Accepted publickey for core from 139.178.89.65 port 53676 ssh2: RSA SHA256:biCYIFFbOggB/YdF4Mf0WJcpIc5G7ySr2IdN9HHR8SA
Dec 12 18:16:13.664000 audit[5470]: USER_ACCT pid=5470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.676601 kernel: audit: type=1101 audit(1765563373.664:867): pid=5470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.677087 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:16:13.674000 audit[5470]: CRED_ACQ pid=5470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.693945 systemd-logind[1603]: New session 20 of user core.
Dec 12 18:16:13.694398 kernel: audit: type=1103 audit(1765563373.674:868): pid=5470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.701608 kernel: audit: type=1006 audit(1765563373.674:869): pid=5470 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Dec 12 18:16:13.701658 kernel: audit: type=1300 audit(1765563373.674:869): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb08966d0 a2=3 a3=0 items=0 ppid=1 pid=5470 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:16:13.674000 audit[5470]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb08966d0 a2=3 a3=0 items=0 ppid=1 pid=5470 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 18:16:13.702175 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 18:16:13.674000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:16:13.713000 audit[5470]: USER_START pid=5470 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.718681 kernel: audit: type=1327 audit(1765563373.674:869): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 18:16:13.718739 kernel: audit: type=1105 audit(1765563373.713:870): pid=5470 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.723000 audit[5473]: CRED_ACQ pid=5473 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.726525 kernel: audit: type=1103 audit(1765563373.723:871): pid=5473 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.921845 sshd[5473]: Connection closed by 139.178.89.65 port 53676
Dec 12 18:16:13.923230 sshd-session[5470]: pam_unix(sshd:session): session closed for user core
Dec 12 18:16:13.925000 audit[5470]: USER_END pid=5470 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.935326 kernel: audit: type=1106 audit(1765563373.925:872): pid=5470 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.933000 audit[5470]: CRED_DISP pid=5470 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.938531 systemd[1]: sshd@19-172.234.28.21:22-139.178.89.65:53676.service: Deactivated successfully.
Dec 12 18:16:13.943392 kernel: audit: type=1104 audit(1765563373.933:873): pid=5470 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 18:16:13.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.234.28.21:22-139.178.89.65:53676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 18:16:13.944390 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 18:16:13.946228 systemd-logind[1603]: Session 20 logged out. Waiting for processes to exit.
Dec 12 18:16:13.948080 systemd-logind[1603]: Removed session 20.
Dec 12 18:16:15.489349 kubelet[2802]: E1212 18:16:15.488465 2802 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8ggwr" podUID="f46da395-0309-47b8-bfd7-ce69c3c79781"