Dec 16 12:59:23.067245 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 16 12:59:23.067269 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 12:59:23.067278 kernel: BIOS-provided physical RAM map: Dec 16 12:59:23.067284 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Dec 16 12:59:23.067305 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Dec 16 12:59:23.067312 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 16 12:59:23.067322 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Dec 16 12:59:23.067328 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Dec 16 12:59:23.067335 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 16 12:59:23.067341 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 16 12:59:23.067347 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 12:59:23.067353 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 16 12:59:23.067359 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Dec 16 12:59:23.067366 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 16 12:59:23.067375 kernel: NX (Execute Disable) protection: active Dec 16 12:59:23.067382 kernel: APIC: Static calls initialized Dec 16 12:59:23.067389 kernel: SMBIOS 2.8 present. 
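(Aside on the command line above: Flatcar mounts /usr from a dm-verity device, /dev/mapper/usr, whose root hash is pinned by the verity.usrhash= argument shown. A minimal sketch of how that mapping could be inspected from a running system, assuming the standard dmsetup and veritysetup tools are available; exact output will differ per machine.)
    # show the device-mapper table for the verity target backing /usr
    dmsetup table usr
    # ask veritysetup for the status of the same mapping, including the root hash in use
    veritysetup status usr
    # compare against the hash passed on the kernel command line
    grep -o 'verity.usrhash=[0-9a-f]*' /proc/cmdline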
Dec 16 12:59:23.067396 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Dec 16 12:59:23.067402 kernel: DMI: Memory slots populated: 1/1 Dec 16 12:59:23.067411 kernel: Hypervisor detected: KVM Dec 16 12:59:23.067417 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Dec 16 12:59:23.067424 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 12:59:23.067431 kernel: kvm-clock: using sched offset of 6023187390 cycles Dec 16 12:59:23.067437 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 12:59:23.067445 kernel: tsc: Detected 2000.000 MHz processor Dec 16 12:59:23.067452 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 12:59:23.067460 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 12:59:23.067468 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Dec 16 12:59:23.067476 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 16 12:59:23.067483 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 12:59:23.067490 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Dec 16 12:59:23.067497 kernel: Using GB pages for direct mapping Dec 16 12:59:23.067504 kernel: ACPI: Early table checksum verification disabled Dec 16 12:59:23.067511 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Dec 16 12:59:23.067518 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:59:23.067527 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:59:23.067534 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:59:23.067541 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 16 12:59:23.067548 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:59:23.067555 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:59:23.067565 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:59:23.067574 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:59:23.067582 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Dec 16 12:59:23.067589 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Dec 16 12:59:23.067596 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 16 12:59:23.067604 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Dec 16 12:59:23.067613 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Dec 16 12:59:23.067620 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Dec 16 12:59:23.067627 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Dec 16 12:59:23.067634 kernel: No NUMA configuration found Dec 16 12:59:23.067860 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Dec 16 12:59:23.067873 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Dec 16 12:59:23.067881 kernel: Zone ranges: Dec 16 12:59:23.067888 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 12:59:23.067899 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 16 12:59:23.067906 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Dec 16 12:59:23.067913 kernel: Device empty Dec 16 12:59:23.067920 kernel: Movable zone start for each node Dec 16 
12:59:23.067928 kernel: Early memory node ranges Dec 16 12:59:23.067935 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 16 12:59:23.067942 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Dec 16 12:59:23.067951 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Dec 16 12:59:23.067959 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Dec 16 12:59:23.067966 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 12:59:23.067973 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 12:59:23.067981 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Dec 16 12:59:23.067988 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 12:59:23.067995 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 12:59:23.068002 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 12:59:23.068012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 12:59:23.068019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 12:59:23.068026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 12:59:23.068034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 12:59:23.068041 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 12:59:23.068048 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 12:59:23.068055 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 16 12:59:23.068065 kernel: TSC deadline timer available Dec 16 12:59:23.068072 kernel: CPU topo: Max. logical packages: 1 Dec 16 12:59:23.068079 kernel: CPU topo: Max. logical dies: 1 Dec 16 12:59:23.068086 kernel: CPU topo: Max. dies per package: 1 Dec 16 12:59:23.068093 kernel: CPU topo: Max. threads per core: 1 Dec 16 12:59:23.068101 kernel: CPU topo: Num. cores per package: 2 Dec 16 12:59:23.068108 kernel: CPU topo: Num. 
threads per package: 2 Dec 16 12:59:23.068117 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 12:59:23.068124 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 12:59:23.068131 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 16 12:59:23.068139 kernel: kvm-guest: setup PV sched yield Dec 16 12:59:23.068146 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 16 12:59:23.068153 kernel: Booting paravirtualized kernel on KVM Dec 16 12:59:23.068161 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 12:59:23.068168 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 12:59:23.068177 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 12:59:23.068185 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 12:59:23.068192 kernel: pcpu-alloc: [0] 0 1 Dec 16 12:59:23.068199 kernel: kvm-guest: PV spinlocks enabled Dec 16 12:59:23.068206 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 12:59:23.068214 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 12:59:23.068224 kernel: random: crng init done Dec 16 12:59:23.068231 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:59:23.068239 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:59:23.068246 kernel: Fallback order for Node 0: 0 Dec 16 12:59:23.068253 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Dec 16 12:59:23.068261 kernel: Policy zone: Normal Dec 16 12:59:23.068268 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:59:23.068275 kernel: software IO TLB: area num 2. Dec 16 12:59:23.068285 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 12:59:23.068306 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 12:59:23.068313 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 12:59:23.068320 kernel: Dynamic Preempt: voluntary Dec 16 12:59:23.068327 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:59:23.068335 kernel: rcu: RCU event tracing is enabled. Dec 16 12:59:23.068343 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 12:59:23.068353 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:59:23.068360 kernel: Rude variant of Tasks RCU enabled. Dec 16 12:59:23.068367 kernel: Tracing variant of Tasks RCU enabled. Dec 16 12:59:23.068375 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:59:23.068382 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 12:59:23.068389 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 12:59:23.068405 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 12:59:23.068413 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 12:59:23.068421 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 16 12:59:23.068428 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:59:23.068438 kernel: Console: colour VGA+ 80x25 Dec 16 12:59:23.068445 kernel: printk: legacy console [tty0] enabled Dec 16 12:59:23.068453 kernel: printk: legacy console [ttyS0] enabled Dec 16 12:59:23.068461 kernel: ACPI: Core revision 20240827 Dec 16 12:59:23.068470 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 16 12:59:23.068478 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 12:59:23.068485 kernel: x2apic enabled Dec 16 12:59:23.068493 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 12:59:23.068686 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 16 12:59:23.068694 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 16 12:59:23.068701 kernel: kvm-guest: setup PV IPIs Dec 16 12:59:23.068711 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 16 12:59:23.068719 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Dec 16 12:59:23.068726 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) Dec 16 12:59:23.068734 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 12:59:23.068741 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 16 12:59:23.068749 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 16 12:59:23.068756 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 12:59:23.068766 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 12:59:23.068774 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 12:59:23.068781 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 16 12:59:23.068789 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 12:59:23.068796 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 12:59:23.068804 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 16 12:59:23.068814 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 16 12:59:23.068822 kernel: active return thunk: srso_alias_return_thunk Dec 16 12:59:23.068829 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 16 12:59:23.068837 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Dec 16 12:59:23.068844 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Dec 16 12:59:23.068852 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 12:59:23.068859 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 12:59:23.068869 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 12:59:23.068877 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 16 12:59:23.068884 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 12:59:23.068892 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Dec 16 12:59:23.068899 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. 
Dec 16 12:59:23.068907 kernel: Freeing SMP alternatives memory: 32K Dec 16 12:59:23.068914 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:59:23.068924 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:59:23.068931 kernel: landlock: Up and running. Dec 16 12:59:23.068939 kernel: SELinux: Initializing. Dec 16 12:59:23.068946 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:59:23.068955 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:59:23.068963 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Dec 16 12:59:23.068970 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 16 12:59:23.068979 kernel: ... version: 0 Dec 16 12:59:23.068987 kernel: ... bit width: 48 Dec 16 12:59:23.068995 kernel: ... generic registers: 6 Dec 16 12:59:23.069002 kernel: ... value mask: 0000ffffffffffff Dec 16 12:59:23.069010 kernel: ... max period: 00007fffffffffff Dec 16 12:59:23.069017 kernel: ... fixed-purpose events: 0 Dec 16 12:59:23.069038 kernel: ... event mask: 000000000000003f Dec 16 12:59:23.069046 kernel: signal: max sigframe size: 3376 Dec 16 12:59:23.069055 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:59:23.069063 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:59:23.069070 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:59:23.069078 kernel: smp: Bringing up secondary CPUs ... Dec 16 12:59:23.069085 kernel: smpboot: x86: Booting SMP configuration: Dec 16 12:59:23.069093 kernel: .... node #0, CPUs: #1 Dec 16 12:59:23.069101 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 12:59:23.069110 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Dec 16 12:59:23.069118 kernel: Memory: 3979480K/4193772K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 208864K reserved, 0K cma-reserved) Dec 16 12:59:23.069126 kernel: devtmpfs: initialized Dec 16 12:59:23.069134 kernel: x86/mm: Memory block size: 128MB Dec 16 12:59:23.069141 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:59:23.069149 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 12:59:23.069156 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:59:23.069166 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:59:23.069174 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:59:23.069181 kernel: audit: type=2000 audit(1765889960.790:1): state=initialized audit_enabled=0 res=1 Dec 16 12:59:23.069189 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:59:23.069196 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 12:59:23.069204 kernel: cpuidle: using governor menu Dec 16 12:59:23.069211 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:59:23.069221 kernel: dca service started, version 1.12.1 Dec 16 12:59:23.069229 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Dec 16 12:59:23.069236 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 16 12:59:23.069244 kernel: PCI: Using configuration type 1 for base access Dec 16 12:59:23.069252 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 12:59:23.069260 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:59:23.069267 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:59:23.069277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:59:23.069284 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:59:23.069304 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:59:23.069312 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:59:23.069319 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:59:23.069327 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:59:23.069334 kernel: ACPI: Interpreter enabled Dec 16 12:59:23.069530 kernel: ACPI: PM: (supports S0 S3 S5) Dec 16 12:59:23.069537 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 12:59:23.069545 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 12:59:23.069552 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 12:59:23.069560 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 16 12:59:23.069567 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 12:59:23.069807 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 12:59:23.070001 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 16 12:59:23.070184 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 16 12:59:23.070195 kernel: PCI host bridge to bus 0000:00 Dec 16 12:59:23.070393 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 12:59:23.070560 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 12:59:23.070728 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 12:59:23.070890 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 16 12:59:23.071050 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 16 12:59:23.071216 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Dec 16 12:59:23.071399 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 12:59:23.071596 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 16 12:59:23.071845 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Dec 16 12:59:23.072028 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Dec 16 12:59:23.072209 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Dec 16 12:59:23.072421 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Dec 16 12:59:23.072598 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 12:59:23.072783 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Dec 16 12:59:23.073381 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Dec 16 12:59:23.073630 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Dec 16 12:59:23.073838 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Dec 16 12:59:23.074042 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 12:59:23.074220 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Dec 16 12:59:23.074438 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Dec 16 12:59:23.074617 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Dec 16 12:59:23.074793 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Dec 16 12:59:23.074977 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 16 12:59:23.075152 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 16 12:59:23.075375 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 16 12:59:23.075617 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Dec 16 12:59:23.075796 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Dec 16 12:59:23.075978 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 16 12:59:23.076153 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Dec 16 12:59:23.076164 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 12:59:23.076175 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 12:59:23.076183 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 12:59:23.076191 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 12:59:23.076198 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 16 12:59:23.076206 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 16 12:59:23.076214 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 16 12:59:23.076221 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 16 12:59:23.076231 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 16 12:59:23.076238 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 16 12:59:23.076246 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 16 12:59:23.076253 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 16 12:59:23.076261 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 16 12:59:23.076269 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 16 12:59:23.076276 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 16 12:59:23.076286 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 16 12:59:23.076308 kernel: iommu: Default domain type: Translated Dec 16 12:59:23.076316 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 12:59:23.076323 kernel: PCI: Using ACPI for IRQ routing Dec 16 12:59:23.076331 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 12:59:23.076338 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Dec 16 12:59:23.076346 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Dec 16 12:59:23.076527 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 16 12:59:23.076701 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 16 12:59:23.076873 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 12:59:23.076883 kernel: vgaarb: loaded Dec 16 12:59:23.076891 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 16 12:59:23.076899 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 16 12:59:23.076907 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 12:59:23.076917 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:59:23.076925 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:59:23.076933 kernel: pnp: PnP ACPI init Dec 16 12:59:23.077117 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 16 12:59:23.077129 kernel: pnp: PnP ACPI: found 5 devices Dec 16 12:59:23.077137 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 12:59:23.077145 kernel: NET: Registered PF_INET protocol family Dec 16 12:59:23.077155 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:59:23.077163 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:59:23.077170 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:59:23.077178 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:59:23.077186 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:59:23.077193 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:59:23.077201 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:59:23.077211 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:59:23.077218 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:59:23.077226 kernel: NET: Registered PF_XDP protocol family Dec 16 12:59:23.077408 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 12:59:23.078419 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 12:59:23.078594 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 12:59:23.078763 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 16 12:59:23.078924 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 16 12:59:23.079104 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Dec 16 12:59:23.079116 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:59:23.079124 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 12:59:23.079131 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Dec 16 12:59:23.079139 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Dec 16 12:59:23.079150 kernel: Initialise system trusted keyrings Dec 16 12:59:23.079159 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:59:23.079166 kernel: Key type asymmetric registered Dec 16 12:59:23.079174 kernel: Asymmetric key parser 'x509' registered Dec 16 12:59:23.079181 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 12:59:23.079189 kernel: io scheduler mq-deadline registered Dec 16 12:59:23.079197 kernel: io scheduler kyber registered Dec 16 12:59:23.079206 kernel: io scheduler bfq registered Dec 16 12:59:23.079214 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 12:59:23.079222 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 12:59:23.079230 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 12:59:23.079237 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:59:23.079245 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 12:59:23.079253 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 12:59:23.079260 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 12:59:23.079270 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 12:59:23.079469 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 12:59:23.079482 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 12:59:23.079649 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 12:59:23.079818 kernel: rtc_cmos 00:03: setting system clock to 
2025-12-16T12:59:21 UTC (1765889961) Dec 16 12:59:23.079986 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 16 12:59:23.080000 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 16 12:59:23.080008 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:59:23.080015 kernel: Segment Routing with IPv6 Dec 16 12:59:23.080023 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:59:23.080031 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:59:23.080039 kernel: Key type dns_resolver registered Dec 16 12:59:23.080046 kernel: IPI shorthand broadcast: enabled Dec 16 12:59:23.080056 kernel: sched_clock: Marking stable (1765004170, 347002050)->(2207150910, -95144690) Dec 16 12:59:23.080064 kernel: registered taskstats version 1 Dec 16 12:59:23.080072 kernel: Loading compiled-in X.509 certificates Dec 16 12:59:23.080079 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 16 12:59:23.080087 kernel: Demotion targets for Node 0: null Dec 16 12:59:23.080094 kernel: Key type .fscrypt registered Dec 16 12:59:23.080102 kernel: Key type fscrypt-provisioning registered Dec 16 12:59:23.080111 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 12:59:23.080119 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:59:23.080126 kernel: ima: No architecture policies found Dec 16 12:59:23.080134 kernel: clk: Disabling unused clocks Dec 16 12:59:23.080141 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 16 12:59:23.080149 kernel: Write protecting the kernel read-only data: 45056k Dec 16 12:59:23.080156 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 16 12:59:23.080166 kernel: Run /init as init process Dec 16 12:59:23.080174 kernel: with arguments: Dec 16 12:59:23.080181 kernel: /init Dec 16 12:59:23.080189 kernel: with environment: Dec 16 12:59:23.080197 kernel: HOME=/ Dec 16 12:59:23.080220 kernel: TERM=linux Dec 16 12:59:23.080230 kernel: SCSI subsystem initialized Dec 16 12:59:23.080240 kernel: libata version 3.00 loaded. 
Dec 16 12:59:23.090364 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 12:59:23.090382 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 12:59:23.090564 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 12:59:23.090743 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 12:59:23.090922 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 12:59:23.091131 kernel: scsi host0: ahci Dec 16 12:59:23.091364 kernel: scsi host1: ahci Dec 16 12:59:23.091558 kernel: scsi host2: ahci Dec 16 12:59:23.091749 kernel: scsi host3: ahci Dec 16 12:59:23.091936 kernel: scsi host4: ahci Dec 16 12:59:23.092124 kernel: scsi host5: ahci Dec 16 12:59:23.092140 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1 Dec 16 12:59:23.092148 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1 Dec 16 12:59:23.092157 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1 Dec 16 12:59:23.092165 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1 Dec 16 12:59:23.092173 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1 Dec 16 12:59:23.092181 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1 Dec 16 12:59:23.092191 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 12:59:23.092199 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 12:59:23.092207 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 12:59:23.092215 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 12:59:23.092223 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 12:59:23.092230 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 12:59:23.092726 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Dec 16 12:59:23.092927 kernel: scsi host6: Virtio SCSI HBA Dec 16 12:59:23.093138 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 16 12:59:23.093360 kernel: sd 6:0:0:0: Power-on or device reset occurred Dec 16 12:59:23.093559 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Dec 16 12:59:23.093753 kernel: sd 6:0:0:0: [sda] Write Protect is off Dec 16 12:59:23.093946 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08 Dec 16 12:59:23.094144 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 16 12:59:23.094156 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 12:59:23.094164 kernel: GPT:25804799 != 167739391 Dec 16 12:59:23.094172 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:59:23.094180 kernel: GPT:25804799 != 167739391 Dec 16 12:59:23.094187 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 12:59:23.094198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:59:23.094442 kernel: sd 6:0:0:0: [sda] Attached SCSI disk Dec 16 12:59:23.094455 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
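(Aside on the GPT warnings above: they are expected when an image laid out for a smaller disk is written to this 80 GiB volume, so the backup GPT header still sits at the old end-of-image position, sector 25804799, rather than at the real end of the disk, sector 167739391. On Flatcar this is reconciled automatically during first boot by the disk-uuid service seen later in this log. Purely as an illustrative sketch, the same repair could be done by hand with either tool the kernel suggests, assuming /dev/sda is the affected disk.)
    # sgdisk: relocate the backup GPT header and table to the actual end of the disk
    sgdisk --move-second-header /dev/sda
    # or parted: printing the table interactively offers a "Fix" prompt for this exact condition
    parted /dev/sda print
    # afterwards, ask the kernel to re-read the now-consistent partition table
    partprobe /dev/sda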
Dec 16 12:59:23.094464 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:59:23.094472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:59:23.094480 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 12:59:23.094488 kernel: raid6: avx2x4 gen() 35192 MB/s Dec 16 12:59:23.094500 kernel: raid6: avx2x2 gen() 33001 MB/s Dec 16 12:59:23.094508 kernel: raid6: avx2x1 gen() 25792 MB/s Dec 16 12:59:23.094516 kernel: raid6: using algorithm avx2x4 gen() 35192 MB/s Dec 16 12:59:23.094524 kernel: raid6: .... xor() 4971 MB/s, rmw enabled Dec 16 12:59:23.094533 kernel: raid6: using avx2x2 recovery algorithm Dec 16 12:59:23.094541 kernel: xor: automatically using best checksumming function avx Dec 16 12:59:23.094549 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:59:23.094557 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (166) Dec 16 12:59:23.094566 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 16 12:59:23.094576 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:59:23.094584 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 12:59:23.094594 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:59:23.094602 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:59:23.094610 kernel: loop: module loaded Dec 16 12:59:23.094618 kernel: loop0: detected capacity change from 0 to 100136 Dec 16 12:59:23.094626 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:59:23.094635 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:59:23.094646 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:59:23.094657 systemd[1]: Detected virtualization kvm. Dec 16 12:59:23.094665 systemd[1]: Detected architecture x86-64. Dec 16 12:59:23.094673 systemd[1]: Running in initrd. Dec 16 12:59:23.094681 systemd[1]: No hostname configured, using default hostname. Dec 16 12:59:23.094690 systemd[1]: Hostname set to . Dec 16 12:59:23.094699 systemd[1]: Initializing machine ID from random generator. Dec 16 12:59:23.094709 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:59:23.094717 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:59:23.094726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:59:23.094734 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:59:23.094744 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:59:23.094752 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:59:23.094763 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:59:23.094772 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Dec 16 12:59:23.094780 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:59:23.094789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:59:23.094797 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:59:23.094806 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:59:23.094816 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:59:23.094825 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:59:23.094834 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:59:23.094842 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:59:23.094851 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:59:23.094859 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:59:23.094867 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:59:23.094878 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:59:23.094886 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:59:23.094895 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:59:23.094903 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:59:23.094912 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:59:23.094921 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:59:23.094929 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:59:23.094939 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:59:23.094948 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:59:23.094957 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:59:23.094965 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:59:23.094974 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:59:23.094982 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:59:23.095014 systemd-journald[304]: Collecting audit messages is enabled. Dec 16 12:59:23.095036 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:59:23.095045 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:59:23.095053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:59:23.095062 kernel: audit: type=1130 audit(1765889963.077:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.095073 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:59:23.095082 systemd-journald[304]: Journal started Dec 16 12:59:23.095101 systemd-journald[304]: Runtime Journal (/run/log/journal/2007338df7f34c2f937e4d2e8f91403d) is 8M, max 78.1M, 70.1M free. Dec 16 12:59:23.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:23.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.105163 kernel: audit: type=1130 audit(1765889963.090:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.105187 kernel: audit: type=1130 audit(1765889963.102:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.105199 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:59:23.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.116384 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:59:23.122062 kernel: audit: type=1130 audit(1765889963.113:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.129393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:59:23.132733 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:59:23.141968 systemd-modules-load[307]: Inserted module 'br_netfilter' Dec 16 12:59:23.143507 kernel: Bridge firewalling registered Dec 16 12:59:23.144932 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:59:23.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.152421 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:59:23.159179 kernel: audit: type=1130 audit(1765889963.145:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.157636 systemd-tmpfiles[315]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:59:23.166642 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:59:23.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.176557 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:59:23.178521 kernel: audit: type=1130 audit(1765889963.168:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:23.178539 kernel: audit: type=1130 audit(1765889963.177:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.187340 kernel: audit: type=1130 audit(1765889963.185:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.185389 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:59:23.197680 kernel: audit: type=1334 audit(1765889963.187:10): prog-id=6 op=LOAD Dec 16 12:59:23.187000 audit: BPF prog-id=6 op=LOAD Dec 16 12:59:23.190320 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:59:23.199899 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:59:23.245900 systemd-resolved[327]: Positive Trust Anchors: Dec 16 12:59:23.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.245909 systemd-resolved[327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:59:23.245914 systemd-resolved[327]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:59:23.245941 systemd-resolved[327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:59:23.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.265905 systemd-resolved[327]: Defaulting to hostname 'linux'. Dec 16 12:59:23.283797 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:59:23.290741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:59:23.293866 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:59:23.298004 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:59:23.304199 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:59:23.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 12:59:23.320637 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:59:23.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.322831 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:59:23.349489 dracut-cmdline[344]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 12:59:23.450315 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:59:23.465345 kernel: iscsi: registered transport (tcp) Dec 16 12:59:23.487954 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:59:23.487991 kernel: QLogic iSCSI HBA Driver Dec 16 12:59:23.514268 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:59:23.546223 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:59:23.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.548596 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:59:23.593505 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:59:23.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.596427 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:59:23.598716 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:59:23.635363 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:59:23.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.638000 audit: BPF prog-id=7 op=LOAD Dec 16 12:59:23.638000 audit: BPF prog-id=8 op=LOAD Dec 16 12:59:23.639235 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:59:23.667053 systemd-udevd[587]: Using default interface naming scheme 'v257'. Dec 16 12:59:23.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.680260 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:59:23.685576 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:59:23.713236 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Dec 16 12:59:23.715681 dracut-pre-trigger[658]: rd.md=0: removing MD RAID activation Dec 16 12:59:23.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.715000 audit: BPF prog-id=9 op=LOAD Dec 16 12:59:23.716854 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:59:23.752451 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:59:23.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.756438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:59:23.765066 systemd-networkd[693]: lo: Link UP Dec 16 12:59:23.765078 systemd-networkd[693]: lo: Gained carrier Dec 16 12:59:23.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.765887 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:59:23.767436 systemd[1]: Reached target network.target - Network. Dec 16 12:59:23.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:23.852797 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:59:23.857478 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:59:23.959944 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 16 12:59:23.996900 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 16 12:59:24.127199 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 16 12:59:24.154845 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 12:59:24.166826 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 12:59:24.170743 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:59:24.176046 kernel: AES CTR mode by8 optimization enabled Dec 16 12:59:24.185433 systemd-networkd[693]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:59:24.185444 systemd-networkd[693]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:59:24.186162 systemd-networkd[693]: eth0: Link UP Dec 16 12:59:24.189323 systemd-networkd[693]: eth0: Gained carrier Dec 16 12:59:24.189335 systemd-networkd[693]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:59:24.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:24.200154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 16 12:59:24.220401 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 12:59:24.200265 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:59:24.221958 disk-uuid[787]: Primary Header is updated. Dec 16 12:59:24.221958 disk-uuid[787]: Secondary Entries is updated. Dec 16 12:59:24.221958 disk-uuid[787]: Secondary Header is updated. Dec 16 12:59:24.202154 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:59:24.220705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:59:24.389753 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:59:24.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:24.401743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:59:24.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:24.403621 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:59:24.404455 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:59:24.405173 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:59:24.408439 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:59:24.441465 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:59:24.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.039373 systemd-networkd[693]: eth0: DHCPv4 address 172.239.193.244/24, gateway 172.239.193.1 acquired from 23.213.15.250 Dec 16 12:59:25.281607 disk-uuid[794]: Warning: The kernel is still using the old partition table. Dec 16 12:59:25.281607 disk-uuid[794]: The new table will be used at the next reboot or after you Dec 16 12:59:25.281607 disk-uuid[794]: run partprobe(8) or kpartx(8) Dec 16 12:59:25.281607 disk-uuid[794]: The operation has completed successfully. Dec 16 12:59:25.289946 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:59:25.290142 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:59:25.309261 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 12:59:25.309284 kernel: audit: type=1130 audit(1765889965.291:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.309326 kernel: audit: type=1131 audit(1765889965.291:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:25.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.294455 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:59:25.346321 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (855) Dec 16 12:59:25.346355 kernel: BTRFS info (device sda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 12:59:25.350343 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:59:25.359910 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 12:59:25.359948 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:59:25.359962 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:59:25.372316 kernel: BTRFS info (device sda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 12:59:25.372893 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:59:25.384119 kernel: audit: type=1130 audit(1765889965.373:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.376426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:59:25.405850 systemd-networkd[693]: eth0: Gained IPv6LL Dec 16 12:59:25.525940 ignition[874]: Ignition 2.22.0 Dec 16 12:59:25.525953 ignition[874]: Stage: fetch-offline Dec 16 12:59:25.525993 ignition[874]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:59:25.526005 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 12:59:25.526096 ignition[874]: parsed url from cmdline: "" Dec 16 12:59:25.529634 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:59:25.539124 kernel: audit: type=1130 audit(1765889965.530:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.526103 ignition[874]: no config URL provided Dec 16 12:59:25.533436 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
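[editor's note] The disk-uuid step above rewrites the GPT and warns that the kernel keeps using the old partition table until the next reboot or a partprobe(8)/kpartx(8) run. As a minimal sketch of what such a re-read amounts to (not part of the boot flow recorded here), assuming a hypothetical target device /dev/sda and the Linux BLKRRPART ioctl:

```python
# Minimal sketch: ask the kernel to re-read a block device's partition table,
# roughly what `partprobe /dev/sda` does in the simple case.
# Assumptions (not taken from the log): the device path and the BLKRRPART
# ioctl number (0x125F on Linux); run as root on a disk whose partitions
# are not currently mounted, otherwise the ioctl fails with EBUSY.
import fcntl
import os

BLKRRPART = 0x125F  # _IO(0x12, 95): re-read partition table

def reread_partition_table(device="/dev/sda"):
    fd = os.open(device, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, BLKRRPART)  # raises OSError if the kernel refuses
    finally:
        os.close(fd)

if __name__ == "__main__":
    reread_partition_table()
```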
Dec 16 12:59:25.526111 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:59:25.526129 ignition[874]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:59:25.526137 ignition[874]: failed to fetch config: resource requires networking Dec 16 12:59:25.527510 ignition[874]: Ignition finished successfully Dec 16 12:59:25.587488 ignition[882]: Ignition 2.22.0 Dec 16 12:59:25.587502 ignition[882]: Stage: fetch Dec 16 12:59:25.587623 ignition[882]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:59:25.587633 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 12:59:25.587702 ignition[882]: parsed url from cmdline: "" Dec 16 12:59:25.587707 ignition[882]: no config URL provided Dec 16 12:59:25.587713 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:59:25.587721 ignition[882]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:59:25.587761 ignition[882]: PUT http://169.254.169.254/v1/token: attempt #1 Dec 16 12:59:25.682125 ignition[882]: PUT result: OK Dec 16 12:59:25.682201 ignition[882]: GET http://169.254.169.254/v1/user-data: attempt #1 Dec 16 12:59:25.802749 ignition[882]: GET result: OK Dec 16 12:59:25.802882 ignition[882]: parsing config with SHA512: 9999213c8e6c062c34fa396cd9dab850d7c084a271a5ac14a4a20aeeb36c715154b1d8c306f177305951d4aaeb2465e153d92a5884178ec2ec71140237c6f287 Dec 16 12:59:25.808491 unknown[882]: fetched base config from "system" Dec 16 12:59:25.809402 unknown[882]: fetched base config from "system" Dec 16 12:59:25.809453 unknown[882]: fetched user config from "akamai" Dec 16 12:59:25.810356 ignition[882]: fetch: fetch complete Dec 16 12:59:25.810362 ignition[882]: fetch: fetch passed Dec 16 12:59:25.815877 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:59:25.810437 ignition[882]: Ignition finished successfully Dec 16 12:59:25.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.819453 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:59:25.826634 kernel: audit: type=1130 audit(1765889965.817:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.850718 ignition[889]: Ignition 2.22.0 Dec 16 12:59:25.850733 ignition[889]: Stage: kargs Dec 16 12:59:25.850853 ignition[889]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:59:25.850864 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 12:59:25.851618 ignition[889]: kargs: kargs passed Dec 16 12:59:25.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.855175 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:59:25.864450 kernel: audit: type=1130 audit(1765889965.855:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.851658 ignition[889]: Ignition finished successfully Dec 16 12:59:25.858415 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
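[editor's note] The Ignition "fetch" stage above obtains its config from the platform metadata service with a PUT to http://169.254.169.254/v1/token followed by a GET of /v1/user-data. A minimal sketch of that token-then-fetch flow is below; the endpoints come straight from the log, but the token header names ("Metadata-Token-Expiry-Seconds" / "Metadata-Token") and the expiry value are assumptions, not shown in the log.

```python
# Sketch of the metadata token + user-data flow seen in the Ignition fetch stage.
# Assumed details (not confirmed by the log): header names and token lifetime.
import urllib.request

METADATA = "http://169.254.169.254"

def fetch_user_data():
    # PUT /v1/token: obtain a short-lived token.
    token_req = urllib.request.Request(
        f"{METADATA}/v1/token",
        method="PUT",
        headers={"Metadata-Token-Expiry-Seconds": "300"},  # assumed header name
    )
    with urllib.request.urlopen(token_req, timeout=10) as resp:
        token = resp.read().decode().strip()

    # GET /v1/user-data with the token attached.
    data_req = urllib.request.Request(
        f"{METADATA}/v1/user-data",
        headers={"Metadata-Token": token},  # assumed header name
    )
    with urllib.request.urlopen(data_req, timeout=10) as resp:
        return resp.read()

if __name__ == "__main__":
    print(fetch_user_data().decode(errors="replace"))
```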
Dec 16 12:59:25.883721 ignition[896]: Ignition 2.22.0 Dec 16 12:59:25.883735 ignition[896]: Stage: disks Dec 16 12:59:25.883844 ignition[896]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:59:25.883855 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 12:59:25.887514 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:59:25.896711 kernel: audit: type=1130 audit(1765889965.888:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.884450 ignition[896]: disks: disks passed Dec 16 12:59:25.888972 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:59:25.884490 ignition[896]: Ignition finished successfully Dec 16 12:59:25.897474 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:59:25.899231 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:59:25.900985 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:59:25.902416 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:59:25.905003 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:59:25.951986 systemd-fsck[905]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 12:59:25.956145 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:59:25.965634 kernel: audit: type=1130 audit(1765889965.956:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:25.958215 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:59:26.076320 kernel: EXT4-fs (sda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none. Dec 16 12:59:26.076505 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:59:26.077957 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:59:26.080898 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:59:26.083375 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:59:26.085406 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 12:59:26.085454 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:59:26.085477 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:59:26.101562 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:59:26.103249 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 12:59:26.113317 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (913) Dec 16 12:59:26.117775 kernel: BTRFS info (device sda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 12:59:26.117813 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:59:26.129312 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 12:59:26.129336 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:59:26.129348 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:59:26.132953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:59:26.180836 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:59:26.186461 initrd-setup-root[944]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:59:26.191801 initrd-setup-root[951]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:59:26.197242 initrd-setup-root[958]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:59:26.309569 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:59:26.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:26.313380 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:59:26.320869 kernel: audit: type=1130 audit(1765889966.310:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:26.320863 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:59:26.335843 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:59:26.341472 kernel: BTRFS info (device sda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 12:59:26.360969 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:59:26.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:26.371328 kernel: audit: type=1130 audit(1765889966.362:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:26.373892 ignition[1027]: INFO : Ignition 2.22.0 Dec 16 12:59:26.373892 ignition[1027]: INFO : Stage: mount Dec 16 12:59:26.376112 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:59:26.376112 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 12:59:26.376112 ignition[1027]: INFO : mount: mount passed Dec 16 12:59:26.378858 ignition[1027]: INFO : Ignition finished successfully Dec 16 12:59:26.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:26.378814 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:59:26.380939 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:59:26.417470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 16 12:59:26.442455 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1037) Dec 16 12:59:26.442486 kernel: BTRFS info (device sda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 12:59:26.448995 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:59:26.456624 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 12:59:26.456658 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:59:26.456671 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:59:26.461183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:59:26.493595 ignition[1054]: INFO : Ignition 2.22.0 Dec 16 12:59:26.493595 ignition[1054]: INFO : Stage: files Dec 16 12:59:26.495280 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:59:26.495280 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 12:59:26.495280 ignition[1054]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:59:26.498699 ignition[1054]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:59:26.498699 ignition[1054]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:59:26.522515 ignition[1054]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:59:26.522515 ignition[1054]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:59:26.522515 ignition[1054]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:59:26.522515 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 12:59:26.522515 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 12:59:26.505757 unknown[1054]: wrote ssh authorized keys file for user: core Dec 16 12:59:26.621111 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:59:26.672060 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 12:59:26.673509 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:59:26.673509 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:59:26.673509 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:59:26.673509 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:59:26.673509 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:59:26.673509 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:59:26.673509 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:59:26.673509 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:59:26.682313 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:59:26.682313 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:59:26.682313 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 12:59:26.682313 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 12:59:26.682313 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 12:59:26.682313 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 16 12:59:27.113767 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:59:27.288370 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 12:59:27.288370 ignition[1054]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:59:27.290980 ignition[1054]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:59:27.293604 ignition[1054]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:59:27.293604 ignition[1054]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:59:27.293604 ignition[1054]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 12:59:27.297738 ignition[1054]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 12:59:27.297738 ignition[1054]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 12:59:27.297738 ignition[1054]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 12:59:27.297738 ignition[1054]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:59:27.297738 ignition[1054]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:59:27.297738 ignition[1054]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:59:27.297738 ignition[1054]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:59:27.297738 ignition[1054]: INFO : files: files passed Dec 16 12:59:27.297738 ignition[1054]: INFO : Ignition finished successfully Dec 16 12:59:27.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:59:27.297255 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:59:27.301595 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:59:27.305583 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:59:27.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.317199 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:59:27.317321 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:59:27.328192 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:59:27.329707 initrd-setup-root-after-ignition[1086]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:59:27.331456 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:59:27.333113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:59:27.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.334253 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:59:27.336414 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:59:27.392478 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:59:27.392774 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:59:27.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.394552 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:59:27.395992 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:59:27.397926 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:59:27.398961 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:59:27.439589 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:59:27.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.441696 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:59:27.459578 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
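[editor's note] Earlier in the fetch stage Ignition logged "parsing config with SHA512: 9999213c..." and, at the end of the files stage, wrote /sysroot/etc/.ignition-result.json. A minimal sketch of reproducing that kind of fingerprint and peeking at the result file follows; it assumes (not confirmed by the log) that the digest is taken over the raw fetched config bytes and that the result file is plain JSON at /etc/.ignition-result.json on the running system.

```python
# Sketch: SHA512 fingerprint of a config blob plus a look at the Ignition result file.
# The config body below is hypothetical; substitute the actual fetched user-data.
import hashlib
import json
import pathlib

def config_sha512(raw_config: bytes) -> str:
    return hashlib.sha512(raw_config).hexdigest()

def read_ignition_result(path="/etc/.ignition-result.json"):
    p = pathlib.Path(path)
    return json.loads(p.read_text()) if p.exists() else None

if __name__ == "__main__":
    example = b'{"ignition": {"version": "3.4.0"}}'  # hypothetical config body
    print(config_sha512(example))
    print(json.dumps(read_ignition_result(), indent=2))
```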
Dec 16 12:59:27.460513 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:59:27.461430 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:59:27.463058 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:59:27.464630 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:59:27.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.464767 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:59:27.466466 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:59:27.467510 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:59:27.469086 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:59:27.470535 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:59:27.471941 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:59:27.473559 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:59:27.475153 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:59:27.476747 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:59:27.478388 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:59:27.479949 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:59:27.481532 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:59:27.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.482996 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:59:27.483134 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:59:27.484814 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:59:27.485849 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:59:27.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.487279 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:59:27.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.487403 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:59:27.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.510065 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:59:27.510166 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:59:27.512262 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 16 12:59:27.512429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:59:27.513440 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:59:27.513539 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:59:27.515533 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:59:27.519493 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:59:27.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.520276 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:59:27.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.523349 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:59:27.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.525520 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:59:27.525662 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:59:27.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.526974 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:59:27.527075 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:59:27.533807 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:59:27.533920 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:59:27.554844 ignition[1110]: INFO : Ignition 2.22.0 Dec 16 12:59:27.556759 ignition[1110]: INFO : Stage: umount Dec 16 12:59:27.556759 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:59:27.556759 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 12:59:27.556759 ignition[1110]: INFO : umount: umount passed Dec 16 12:59:27.556759 ignition[1110]: INFO : Ignition finished successfully Dec 16 12:59:27.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.560065 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:59:27.560664 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 16 12:59:27.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.560771 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:59:27.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.562828 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:59:27.562916 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:59:27.563683 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:59:27.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.563734 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:59:27.567706 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:59:27.567760 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:59:27.569239 systemd[1]: Stopped target network.target - Network. Dec 16 12:59:27.570759 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:59:27.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.570816 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:59:27.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.573961 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:59:27.574619 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:59:27.576317 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:59:27.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.577210 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:59:27.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.580377 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:59:27.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.581088 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:59:27.581134 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:59:27.584840 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:59:27.584883 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 16 12:59:27.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.587664 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 12:59:27.587699 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:59:27.608000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:59:27.609000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:59:27.589079 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:59:27.589135 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:59:27.590446 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:59:27.590495 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:59:27.592053 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:59:27.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.593777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:59:27.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.595959 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:59:27.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.596061 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:59:27.598597 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:59:27.598685 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:59:27.601969 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:59:27.602100 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:59:27.605571 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:59:27.605707 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:59:27.609202 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:59:27.610719 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:59:27.610763 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:59:27.612872 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:59:27.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.614730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:59:27.614789 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:59:27.617853 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:59:27.617907 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:59:27.619477 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Dec 16 12:59:27.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.619526 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:59:27.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.621045 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:59:27.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.635439 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:59:27.635598 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:59:27.637761 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:59:27.637829 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:59:27.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.643173 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:59:27.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.643217 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:59:27.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.644592 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:59:27.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.644646 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:59:27.646685 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:59:27.646740 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:59:27.648214 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:59:27.648265 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:59:27.651418 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:59:27.652644 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:59:27.652702 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 16 12:59:27.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.655271 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:59:27.655354 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:59:27.656227 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 12:59:27.656276 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:59:27.657092 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:59:27.657140 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:59:27.658668 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:59:27.658720 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:59:27.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.669169 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:59:27.669278 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:59:27.697711 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:59:27.697830 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:59:27.699818 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:59:27.701813 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:59:27.718941 systemd[1]: Switching root. Dec 16 12:59:27.753330 systemd-journald[304]: Received SIGTERM from PID 1 (systemd). Dec 16 12:59:27.753368 systemd-journald[304]: Journal stopped Dec 16 12:59:28.962191 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:59:28.962218 kernel: SELinux: policy capability open_perms=1 Dec 16 12:59:28.962230 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:59:28.962241 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:59:28.962250 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:59:28.962262 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:59:28.962273 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:59:28.962283 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:59:28.962308 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:59:28.962319 systemd[1]: Successfully loaded SELinux policy in 71.923ms. Dec 16 12:59:28.962331 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.732ms. 
Dec 16 12:59:28.962345 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:59:28.962356 systemd[1]: Detected virtualization kvm. Dec 16 12:59:28.962366 systemd[1]: Detected architecture x86-64. Dec 16 12:59:28.962380 systemd[1]: Detected first boot. Dec 16 12:59:28.962391 systemd[1]: Initializing machine ID from random generator. Dec 16 12:59:28.962401 zram_generator::config[1158]: No configuration found. Dec 16 12:59:28.962413 kernel: Guest personality initialized and is inactive Dec 16 12:59:28.962424 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 12:59:28.962435 kernel: Initialized host personality Dec 16 12:59:28.962447 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:59:28.962458 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:59:28.962468 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:59:28.962479 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:59:28.962490 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:59:28.962505 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:59:28.962516 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:59:28.962529 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:59:28.962540 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:59:28.962551 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:59:28.962562 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:59:28.962575 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:59:28.962586 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:59:28.962598 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:59:28.962609 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:59:28.962620 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:59:28.962631 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:59:28.962641 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:59:28.962652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:59:28.962665 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 12:59:28.962677 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:59:28.962690 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:59:28.962701 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:59:28.962712 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:59:28.962723 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Dec 16 12:59:28.962736 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:59:28.962746 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:59:28.962757 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:59:28.962768 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:59:28.962779 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:59:28.962789 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:59:28.962800 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:59:28.962816 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:59:28.962827 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:59:28.962838 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:59:28.962849 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:59:28.962862 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:59:28.962873 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:59:28.962884 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:59:28.962895 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:59:28.962906 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:59:28.962917 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:59:28.962928 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:59:28.962941 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:59:28.962952 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:59:28.962963 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:59:28.962973 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:59:28.962984 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:59:28.962995 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:59:28.963008 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:59:28.963020 systemd[1]: Reached target machines.target - Containers. Dec 16 12:59:28.963031 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:59:28.963042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:59:28.963054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:59:28.963065 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:59:28.963076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:59:28.963089 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:59:28.963100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Dec 16 12:59:28.963111 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:59:28.963122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:59:28.963133 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:59:28.963144 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:59:28.963155 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:59:28.963168 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:59:28.963179 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:59:28.963190 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:59:28.963201 kernel: fuse: init (API version 7.41) Dec 16 12:59:28.963212 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:59:28.963223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:59:28.963235 kernel: ACPI: bus type drm_connector registered Dec 16 12:59:28.963246 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:59:28.963257 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:59:28.963268 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:59:28.963279 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:59:28.963304 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:59:28.963332 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:59:28.963343 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:59:28.963354 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:59:28.963386 systemd-journald[1243]: Collecting audit messages is enabled. Dec 16 12:59:28.963409 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:59:28.963421 systemd-journald[1243]: Journal started Dec 16 12:59:28.963440 systemd-journald[1243]: Runtime Journal (/run/log/journal/4e807a80f5134ceeaa86c445a63f0830) is 8M, max 78.1M, 70.1M free. Dec 16 12:59:28.677000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:59:28.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:28.881000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:59:28.881000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:59:28.883000 audit: BPF prog-id=15 op=LOAD Dec 16 12:59:28.883000 audit: BPF prog-id=16 op=LOAD Dec 16 12:59:28.883000 audit: BPF prog-id=17 op=LOAD Dec 16 12:59:28.957000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:59:28.957000 audit[1243]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd6af7a3e0 a2=4000 a3=0 items=0 ppid=1 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:28.957000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:59:28.543087 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:59:28.569223 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 12:59:28.569785 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:59:28.968328 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:59:28.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.970491 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:59:28.971472 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:59:28.972786 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:59:28.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.974197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:59:28.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.975492 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:59:28.975997 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:59:28.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.977186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:59:28.977884 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:59:28.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:28.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.979361 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:59:28.979629 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:59:28.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.980798 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:59:28.981054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:59:28.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.982683 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:59:28.982950 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:59:28.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.984119 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:59:28.984513 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:59:28.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.985745 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:59:28.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.987162 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 16 12:59:28.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.989190 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:59:28.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.990935 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:59:28.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.007201 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:59:29.008991 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:59:29.014383 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:59:29.018041 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:59:29.018861 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:59:29.018949 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:59:29.020522 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:59:29.024932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:59:29.025060 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:59:29.028425 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:59:29.031491 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:59:29.033425 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:59:29.037445 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:59:29.039440 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:59:29.042518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:59:29.046530 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:59:29.048626 systemd-journald[1243]: Time spent on flushing to /var/log/journal/4e807a80f5134ceeaa86c445a63f0830 is 41.578ms for 1124 entries. Dec 16 12:59:29.048626 systemd-journald[1243]: System Journal (/var/log/journal/4e807a80f5134ceeaa86c445a63f0830) is 8M, max 588.1M, 580.1M free. Dec 16 12:59:29.105550 systemd-journald[1243]: Received client request to flush runtime journal. 
Dec 16 12:59:29.105604 kernel: loop1: detected capacity change from 0 to 111544 Dec 16 12:59:29.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.056891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:59:29.061080 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:59:29.063195 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:59:29.076347 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:59:29.078862 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:59:29.085485 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:59:29.090933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:59:29.113759 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:59:29.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.139348 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:59:29.140256 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Dec 16 12:59:29.140269 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Dec 16 12:59:29.151057 kernel: loop2: detected capacity change from 0 to 119256 Dec 16 12:59:29.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.148920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:59:29.152074 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:59:29.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.154331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:59:29.182325 kernel: loop3: detected capacity change from 0 to 8 Dec 16 12:59:29.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.198029 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Dec 16 12:59:29.203347 kernel: loop4: detected capacity change from 0 to 219144 Dec 16 12:59:29.199000 audit: BPF prog-id=18 op=LOAD Dec 16 12:59:29.200000 audit: BPF prog-id=19 op=LOAD Dec 16 12:59:29.200000 audit: BPF prog-id=20 op=LOAD Dec 16 12:59:29.202481 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:59:29.204000 audit: BPF prog-id=21 op=LOAD Dec 16 12:59:29.207694 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:59:29.211121 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:59:29.224000 audit: BPF prog-id=22 op=LOAD Dec 16 12:59:29.226000 audit: BPF prog-id=23 op=LOAD Dec 16 12:59:29.226000 audit: BPF prog-id=24 op=LOAD Dec 16 12:59:29.229000 audit: BPF prog-id=25 op=LOAD Dec 16 12:59:29.229000 audit: BPF prog-id=26 op=LOAD Dec 16 12:59:29.229000 audit: BPF prog-id=27 op=LOAD Dec 16 12:59:29.228517 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:59:29.231569 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:59:29.251437 kernel: loop5: detected capacity change from 0 to 111544 Dec 16 12:59:29.252673 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Dec 16 12:59:29.252689 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Dec 16 12:59:29.276682 kernel: loop6: detected capacity change from 0 to 119256 Dec 16 12:59:29.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.274732 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:59:29.303316 kernel: loop7: detected capacity change from 0 to 8 Dec 16 12:59:29.309661 kernel: loop1: detected capacity change from 0 to 219144 Dec 16 12:59:29.330186 (sd-merge)[1308]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'. Dec 16 12:59:29.335165 systemd-nsresourced[1305]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:59:29.336096 (sd-merge)[1308]: Merged extensions into '/usr'. Dec 16 12:59:29.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.346419 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:59:29.348768 systemd[1]: Reload requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:59:29.348778 systemd[1]: Reloading... Dec 16 12:59:29.477428 systemd-oomd[1301]: No swap; memory pressure usage will be degraded Dec 16 12:59:29.487230 zram_generator::config[1351]: No configuration found. Dec 16 12:59:29.519802 systemd-resolved[1302]: Positive Trust Anchors: Dec 16 12:59:29.519823 systemd-resolved[1302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:59:29.519828 systemd-resolved[1302]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:59:29.519856 systemd-resolved[1302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:59:29.528402 systemd-resolved[1302]: Defaulting to hostname 'linux'. Dec 16 12:59:29.705751 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:59:29.705841 systemd[1]: Reloading finished in 356 ms. Dec 16 12:59:29.740475 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:59:29.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.741644 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:59:29.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.742584 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:59:29.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.743643 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:59:29.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.744736 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:59:29.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:29.749607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:59:29.764934 systemd[1]: Starting ensure-sysext.service... Dec 16 12:59:29.768430 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:59:29.770000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:59:29.770000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:59:29.770000 audit: BPF prog-id=28 op=LOAD Dec 16 12:59:29.770000 audit: BPF prog-id=29 op=LOAD Dec 16 12:59:29.772462 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 16 12:59:29.775000 audit: BPF prog-id=30 op=LOAD Dec 16 12:59:29.775000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:59:29.775000 audit: BPF prog-id=31 op=LOAD Dec 16 12:59:29.776000 audit: BPF prog-id=32 op=LOAD Dec 16 12:59:29.776000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:59:29.776000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:59:29.778000 audit: BPF prog-id=33 op=LOAD Dec 16 12:59:29.778000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:59:29.778000 audit: BPF prog-id=34 op=LOAD Dec 16 12:59:29.781000 audit: BPF prog-id=35 op=LOAD Dec 16 12:59:29.781000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:59:29.781000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:59:29.782000 audit: BPF prog-id=36 op=LOAD Dec 16 12:59:29.782000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:59:29.782000 audit: BPF prog-id=37 op=LOAD Dec 16 12:59:29.782000 audit: BPF prog-id=38 op=LOAD Dec 16 12:59:29.782000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:59:29.782000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:59:29.783000 audit: BPF prog-id=39 op=LOAD Dec 16 12:59:29.783000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:59:29.786000 audit: BPF prog-id=40 op=LOAD Dec 16 12:59:29.786000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:59:29.786000 audit: BPF prog-id=41 op=LOAD Dec 16 12:59:29.786000 audit: BPF prog-id=42 op=LOAD Dec 16 12:59:29.786000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:59:29.786000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:59:29.795116 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:59:29.795485 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:59:29.795791 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:59:29.796791 systemd[1]: Reload requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:59:29.796805 systemd[1]: Reloading... Dec 16 12:59:29.797090 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Dec 16 12:59:29.797161 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Dec 16 12:59:29.808261 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:59:29.808518 systemd-tmpfiles[1395]: Skipping /boot Dec 16 12:59:29.827290 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:59:29.827399 systemd-tmpfiles[1395]: Skipping /boot Dec 16 12:59:29.849923 systemd-udevd[1396]: Using default interface naming scheme 'v257'. Dec 16 12:59:29.908333 zram_generator::config[1439]: No configuration found. Dec 16 12:59:30.012325 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:59:30.034322 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 12:59:30.063318 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 12:59:30.063650 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 12:59:30.110315 kernel: ACPI: button: Power Button [PWRF] Dec 16 12:59:30.155459 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 12:59:30.156080 systemd[1]: Reloading finished in 358 ms. Dec 16 12:59:30.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:30.162953 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:59:30.166863 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:59:30.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.177000 audit: BPF prog-id=43 op=LOAD Dec 16 12:59:30.177000 audit: BPF prog-id=44 op=LOAD Dec 16 12:59:30.177000 audit: BPF prog-id=45 op=LOAD Dec 16 12:59:30.179000 audit: BPF prog-id=46 op=LOAD Dec 16 12:59:30.179000 audit: BPF prog-id=47 op=LOAD Dec 16 12:59:30.179000 audit: BPF prog-id=48 op=LOAD Dec 16 12:59:30.180000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:59:30.180000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:59:30.180000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:59:30.180000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:59:30.180000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:59:30.180000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:59:30.181000 audit: BPF prog-id=49 op=LOAD Dec 16 12:59:30.181000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:59:30.181000 audit: BPF prog-id=50 op=LOAD Dec 16 12:59:30.181000 audit: BPF prog-id=51 op=LOAD Dec 16 12:59:30.181000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:59:30.181000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:59:30.183000 audit: BPF prog-id=52 op=LOAD Dec 16 12:59:30.183000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:59:30.183000 audit: BPF prog-id=53 op=LOAD Dec 16 12:59:30.183000 audit: BPF prog-id=54 op=LOAD Dec 16 12:59:30.183000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:59:30.183000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:59:30.184000 audit: BPF prog-id=55 op=LOAD Dec 16 12:59:30.186000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:59:30.187000 audit: BPF prog-id=56 op=LOAD Dec 16 12:59:30.187000 audit: BPF prog-id=57 op=LOAD Dec 16 12:59:30.187000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:59:30.187000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:59:30.215892 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:59:30.221606 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:59:30.225788 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:59:30.228576 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:59:30.230000 audit: BPF prog-id=58 op=LOAD Dec 16 12:59:30.237434 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:59:30.240822 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:59:30.256717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:59:30.262139 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:59:30.262771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:59:30.272455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:59:30.285591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:59:30.294897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 12:59:30.296845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:59:30.297082 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:59:30.297577 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:59:30.297798 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:59:30.314752 kernel: kauditd_printk_skb: 179 callbacks suppressed Dec 16 12:59:30.314805 kernel: audit: type=1127 audit(1765889970.313:217): pid=1518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.313000 audit[1518]: SYSTEM_BOOT pid=1518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.308333 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:59:30.308535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:59:30.308754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:59:30.308942 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:59:30.309064 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:59:30.310372 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:59:30.348567 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:59:30.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.372614 kernel: audit: type=1130 audit(1765889970.363:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.380380 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:59:30.380673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:59:30.386164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Dec 16 12:59:30.399044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:59:30.400560 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:59:30.400693 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:59:30.400846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:59:30.416841 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:59:30.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.419842 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:59:30.422141 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:59:30.425333 kernel: audit: type=1130 audit(1765889970.418:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.434314 kernel: audit: type=1130 audit(1765889970.426:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.446751 kernel: EDAC MC: Ver: 3.0.0 Dec 16 12:59:30.446798 kernel: audit: type=1131 audit(1765889970.426:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.446274 systemd[1]: Finished ensure-sysext.service. Dec 16 12:59:30.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.453005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:59:30.455343 kernel: audit: type=1130 audit(1765889970.447:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.453996 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 16 12:59:30.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.464317 kernel: audit: type=1130 audit(1765889970.456:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.457066 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:59:30.458479 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:59:30.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.475528 kernel: audit: type=1131 audit(1765889970.456:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.492459 kernel: audit: type=1130 audit(1765889970.467:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.492524 kernel: audit: type=1131 audit(1765889970.467:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.525145 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:59:30.526198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:59:30.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:30.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:30.563000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:59:30.563000 audit[1557]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc763fe590 a2=420 a3=0 items=0 ppid=1513 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:30.563000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:59:30.564504 augenrules[1557]: No rules Dec 16 12:59:30.576998 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:59:30.577387 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:59:30.590970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:59:30.591047 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:59:30.596595 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:59:30.600609 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:59:30.601632 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:59:30.626785 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 12:59:30.634449 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:59:30.648230 systemd-networkd[1517]: lo: Link UP Dec 16 12:59:30.648579 systemd-networkd[1517]: lo: Gained carrier Dec 16 12:59:30.651070 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:59:30.651784 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:59:30.651789 systemd-networkd[1517]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:59:30.652741 systemd[1]: Reached target network.target - Network. Dec 16 12:59:30.655967 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:59:30.657511 systemd-networkd[1517]: eth0: Link UP Dec 16 12:59:30.657770 systemd-networkd[1517]: eth0: Gained carrier Dec 16 12:59:30.657784 systemd-networkd[1517]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:59:30.659524 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:59:30.671268 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:59:30.700937 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:59:30.701809 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:59:30.725247 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Dec 16 12:59:30.821108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:59:30.952880 ldconfig[1515]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:59:30.956368 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:59:30.958744 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:59:30.978053 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:59:30.979230 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:59:30.980309 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:59:30.981272 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:59:30.982039 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 12:59:30.982946 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:59:30.983947 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:59:30.984782 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:59:30.985858 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:59:30.986595 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:59:30.987354 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:59:30.987389 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:59:30.988181 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:59:30.990939 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:59:30.993375 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:59:30.995920 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:59:30.996845 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:59:30.997621 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:59:31.000431 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:59:31.001653 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:59:31.003114 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:59:31.004556 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:59:31.005432 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:59:31.006147 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:59:31.006184 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:59:31.007144 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:59:31.011425 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:59:31.013772 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:59:31.018385 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Dec 16 12:59:31.021257 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:59:31.024566 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:59:31.026130 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:59:31.036634 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 12:59:31.040477 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:59:31.046165 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:59:31.052488 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:59:31.065769 jq[1587]: false Dec 16 12:59:31.060650 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:59:31.071491 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:59:31.073365 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:59:31.073820 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:59:31.076086 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:59:31.082765 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:59:31.086724 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Refreshing passwd entry cache Dec 16 12:59:31.088168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:59:31.089343 oslogin_cache_refresh[1589]: Refreshing passwd entry cache Dec 16 12:59:31.094003 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:59:31.094367 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:59:31.108072 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:59:31.109761 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Failure getting users, quitting Dec 16 12:59:31.109761 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 12:59:31.109761 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Refreshing group entry cache Dec 16 12:59:31.109761 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Failure getting groups, quitting Dec 16 12:59:31.109761 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 12:59:31.109446 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:59:31.108427 oslogin_cache_refresh[1589]: Failure getting users, quitting Dec 16 12:59:31.108445 oslogin_cache_refresh[1589]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 12:59:31.108486 oslogin_cache_refresh[1589]: Refreshing group entry cache Dec 16 12:59:31.108957 oslogin_cache_refresh[1589]: Failure getting groups, quitting Dec 16 12:59:31.108968 oslogin_cache_refresh[1589]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 12:59:31.111712 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Dec 16 12:59:31.116377 extend-filesystems[1588]: Found /dev/sda6 Dec 16 12:59:31.111977 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 12:59:31.131915 jq[1602]: true Dec 16 12:59:31.138896 extend-filesystems[1588]: Found /dev/sda9 Dec 16 12:59:31.147135 update_engine[1601]: I20251216 12:59:31.146884 1601 main.cc:92] Flatcar Update Engine starting Dec 16 12:59:31.149231 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:59:31.149598 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:59:31.151250 extend-filesystems[1588]: Checking size of /dev/sda9 Dec 16 12:59:31.156964 tar[1607]: linux-amd64/LICENSE Dec 16 12:59:31.157168 tar[1607]: linux-amd64/helm Dec 16 12:59:31.168429 extend-filesystems[1588]: Resized partition /dev/sda9 Dec 16 12:59:31.170489 extend-filesystems[1639]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:59:31.179952 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks Dec 16 12:59:31.186365 dbus-daemon[1585]: [system] SELinux support is enabled Dec 16 12:59:31.186582 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:59:31.189009 jq[1627]: true Dec 16 12:59:31.194286 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:59:31.204445 coreos-metadata[1584]: Dec 16 12:59:31.202 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 16 12:59:31.194366 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:59:31.195776 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:59:31.195796 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:59:31.224537 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:59:31.225796 update_engine[1601]: I20251216 12:59:31.225512 1601 update_check_scheduler.cc:74] Next update check in 8m38s Dec 16 12:59:31.254578 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:59:31.278658 systemd-logind[1599]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 12:59:31.278687 systemd-logind[1599]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 12:59:31.282628 systemd-logind[1599]: New seat seat0. Dec 16 12:59:31.283391 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:59:31.386754 bash[1659]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:59:31.385953 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:59:31.394958 systemd[1]: Starting sshkeys.service... Dec 16 12:59:31.421858 systemd-networkd[1517]: eth0: DHCPv4 address 172.239.193.244/24, gateway 172.239.193.1 acquired from 23.213.15.250 Dec 16 12:59:31.424148 systemd-timesyncd[1563]: Network configuration changed, trying to establish connection. 
Dec 16 12:59:31.434622 dbus-daemon[1585]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1517 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 12:59:31.444472 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 12:59:31.464085 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 12:59:31.467541 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 12:59:32.880059 systemd-resolved[1302]: Clock change detected. Flushing caches. Dec 16 12:59:32.880206 systemd-timesyncd[1563]: Contacted time server 162.244.81.139:123 (0.flatcar.pool.ntp.org). Dec 16 12:59:32.880266 systemd-timesyncd[1563]: Initial clock synchronization to Tue 2025-12-16 12:59:32.879391 UTC. Dec 16 12:59:32.888237 containerd[1618]: time="2025-12-16T12:59:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:59:32.892114 containerd[1618]: time="2025-12-16T12:59:32.891393398Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:59:32.894824 locksmithd[1644]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:59:32.912870 coreos-metadata[1668]: Dec 16 12:59:32.910 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 16 12:59:32.928163 containerd[1618]: time="2025-12-16T12:59:32.927596188Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.7µs" Dec 16 12:59:32.929212 containerd[1618]: time="2025-12-16T12:59:32.928971378Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:59:32.929212 containerd[1618]: time="2025-12-16T12:59:32.929039608Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:59:32.929212 containerd[1618]: time="2025-12-16T12:59:32.929084538Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:59:32.929826 containerd[1618]: time="2025-12-16T12:59:32.929533268Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:59:32.929878 containerd[1618]: time="2025-12-16T12:59:32.929553468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:59:32.930331 containerd[1618]: time="2025-12-16T12:59:32.930148688Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:59:32.930331 containerd[1618]: time="2025-12-16T12:59:32.930164898Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:59:32.934147 containerd[1618]: time="2025-12-16T12:59:32.932096398Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:59:32.934147 containerd[1618]: 
time="2025-12-16T12:59:32.932115728Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:59:32.934147 containerd[1618]: time="2025-12-16T12:59:32.932127548Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:59:32.934147 containerd[1618]: time="2025-12-16T12:59:32.932135178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:59:32.934147 containerd[1618]: time="2025-12-16T12:59:32.932305468Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:59:32.934147 containerd[1618]: time="2025-12-16T12:59:32.932323798Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:59:32.934147 containerd[1618]: time="2025-12-16T12:59:32.932416378Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:59:32.934147 containerd[1618]: time="2025-12-16T12:59:32.932612648Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:59:32.934147 containerd[1618]: time="2025-12-16T12:59:32.932641698Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:59:32.934147 containerd[1618]: time="2025-12-16T12:59:32.932650468Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:59:32.934147 containerd[1618]: time="2025-12-16T12:59:32.933914598Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:59:32.936844 containerd[1618]: time="2025-12-16T12:59:32.936788888Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:59:32.948811 kernel: EXT4-fs (sda9): resized filesystem to 19377147 Dec 16 12:59:32.949282 containerd[1618]: time="2025-12-16T12:59:32.949069328Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:59:32.951318 extend-filesystems[1639]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 12:59:32.951318 extend-filesystems[1639]: old_desc_blocks = 1, new_desc_blocks = 10 Dec 16 12:59:32.951318 extend-filesystems[1639]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long. Dec 16 12:59:32.963440 extend-filesystems[1588]: Resized filesystem in /dev/sda9 Dec 16 12:59:32.965460 sshd_keygen[1636]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:59:32.954692 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956308898Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956342278Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956666498Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956680308Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956692288Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956703048Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956719288Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956728478Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956738138Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956756038Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956764928Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956774978Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956783188Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:59:32.965648 containerd[1618]: time="2025-12-16T12:59:32.956793178Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:59:32.954979 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.956892908Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.956910798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.956929648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.956940248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.956952148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.956960178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.956972068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.956980908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.956989828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.956998358Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.960333318Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.960372248Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.961276688Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.961466408Z" level=info msg="Start snapshots syncer" Dec 16 12:59:32.965988 containerd[1618]: time="2025-12-16T12:59:32.961491938Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:59:32.966736 containerd[1618]: time="2025-12-16T12:59:32.966268368Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:59:32.971250 containerd[1618]: time="2025-12-16T12:59:32.971215458Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:59:32.971404 containerd[1618]: time="2025-12-16T12:59:32.971375288Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:59:32.971733 containerd[1618]: time="2025-12-16T12:59:32.971700128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:59:32.971897 containerd[1618]: time="2025-12-16T12:59:32.971869348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:59:32.972996 containerd[1618]: time="2025-12-16T12:59:32.971895338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:59:32.972996 containerd[1618]: time="2025-12-16T12:59:32.972993888Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973287038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973505708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973522298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973535768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973548698Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973595948Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973612588Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973621908Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973631958Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973640668Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973655418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973667418Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973683818Z" level=info msg="runtime interface created" Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973689908Z" level=info msg="created NRI interface" Dec 16 12:59:32.974379 containerd[1618]: time="2025-12-16T12:59:32.973698208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:59:32.974637 containerd[1618]: time="2025-12-16T12:59:32.973718378Z" level=info msg="Connect containerd service" Dec 16 12:59:32.974637 containerd[1618]: time="2025-12-16T12:59:32.973742758Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:59:32.975395 containerd[1618]: time="2025-12-16T12:59:32.975359788Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:59:32.989689 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:59:32.997769 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:59:33.019753 coreos-metadata[1668]: Dec 16 12:59:33.019 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Dec 16 12:59:33.038906 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:59:33.039933 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:59:33.042602 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 12:59:33.045978 dbus-daemon[1585]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 12:59:33.047082 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
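The config="{...}" blob logged by the CRI plugin above is its effective configuration serialized as JSON: runc is the default runtime (io.containerd.runc.v2 with SystemdCgroup=true), CNI binaries are expected under /opt/cni/bin, and network configs under /etc/cni/net.d with maxConfNum=1. The later "failed to load cni during init" error is therefore expected on a fresh node, since /etc/cni/net.d is still empty until a CNI add-on installs a config. Below is a hypothetical Python sketch that pulls those fields out of an abbreviated excerpt of the logged config and writes a minimal bridge conflist that would satisfy the loader; the file name, network name, subnet, and plugin choice are assumptions, not values from this log, and on a kubeadm cluster the CNI add-on creates this file itself.

import json
from pathlib import Path

# Abbreviated excerpt of the config="{...}" JSON dumped above; only a few of the
# many keys are reproduced here.
cri_cfg = json.loads(
    '{"containerd":{"defaultRuntimeName":"runc","runtimes":{"runc":'
    '{"runtimeType":"io.containerd.runc.v2","options":{"SystemdCgroup":true}}}},'
    '"cni":{"binDirs":["/opt/cni/bin"],"confDir":"/etc/cni/net.d","maxConfNum":1}}'
)
conf_dir = Path(cri_cfg["cni"]["confDir"])  # /etc/cni/net.d

# Minimal bridge network (hypothetical values) that would clear the
# "no network config found" error, assuming the standard bridge, host-local and
# portmap plugins exist under /opt/cni/bin. Writing here requires root.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

conf_dir.mkdir(parents=True, exist_ok=True)
(conf_dir / "10-example.conflist").write_text(json.dumps(conflist, indent=2))
print("wrote", conf_dir / "10-example.conflist")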
Dec 16 12:59:33.049893 dbus-daemon[1585]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.9' (uid=0 pid=1667 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 12:59:33.056258 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 12:59:33.100751 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:59:33.108487 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:59:33.113550 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 12:59:33.115694 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:59:33.153095 coreos-metadata[1668]: Dec 16 12:59:33.153 INFO Fetch successful Dec 16 12:59:33.181196 systemd-networkd[1517]: eth0: Gained IPv6LL Dec 16 12:59:33.186938 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:59:33.187633 polkitd[1701]: Started polkitd version 126 Dec 16 12:59:33.191202 polkitd[1701]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 12:59:33.191587 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:59:33.191456 polkitd[1701]: Loading rules from directory /run/polkit-1/rules.d Dec 16 12:59:33.191492 polkitd[1701]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:59:33.191693 polkitd[1701]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 12:59:33.191714 polkitd[1701]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:59:33.191747 polkitd[1701]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 12:59:33.192254 polkitd[1701]: Finished loading, compiling and executing 2 rules Dec 16 12:59:33.193462 dbus-daemon[1585]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 12:59:33.193899 polkitd[1701]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 12:59:33.195409 containerd[1618]: time="2025-12-16T12:59:33.195367808Z" level=info msg="Start subscribing containerd event" Dec 16 12:59:33.195464 containerd[1618]: time="2025-12-16T12:59:33.195434028Z" level=info msg="Start recovering state" Dec 16 12:59:33.196561 containerd[1618]: time="2025-12-16T12:59:33.196532628Z" level=info msg="Start event monitor" Dec 16 12:59:33.196601 containerd[1618]: time="2025-12-16T12:59:33.196562698Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:59:33.196601 containerd[1618]: time="2025-12-16T12:59:33.196571528Z" level=info msg="Start streaming server" Dec 16 12:59:33.196601 containerd[1618]: time="2025-12-16T12:59:33.196581658Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:59:33.196601 containerd[1618]: time="2025-12-16T12:59:33.196588738Z" level=info msg="runtime interface starting up..." Dec 16 12:59:33.196601 containerd[1618]: time="2025-12-16T12:59:33.196594578Z" level=info msg="starting plugins..." Dec 16 12:59:33.196702 containerd[1618]: time="2025-12-16T12:59:33.196609098Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:59:33.196803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:59:33.199914 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Dec 16 12:59:33.202667 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 12:59:33.203653 update-ssh-keys[1711]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:59:33.204167 containerd[1618]: time="2025-12-16T12:59:33.203891758Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:59:33.204730 containerd[1618]: time="2025-12-16T12:59:33.204684948Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:59:33.206813 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 12:59:33.207271 containerd[1618]: time="2025-12-16T12:59:33.207223068Z" level=info msg="containerd successfully booted in 0.327880s" Dec 16 12:59:33.209091 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:59:33.217363 systemd[1]: Finished sshkeys.service. Dec 16 12:59:33.233034 systemd-hostnamed[1667]: Hostname set to <172-239-193-244> (transient) Dec 16 12:59:33.233757 systemd-resolved[1302]: System hostname changed to '172-239-193-244'. Dec 16 12:59:33.252427 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:59:33.357881 tar[1607]: linux-amd64/README.md Dec 16 12:59:33.376168 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:59:33.525104 coreos-metadata[1584]: Dec 16 12:59:33.524 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Dec 16 12:59:33.614390 coreos-metadata[1584]: Dec 16 12:59:33.614 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Dec 16 12:59:33.796080 coreos-metadata[1584]: Dec 16 12:59:33.795 INFO Fetch successful Dec 16 12:59:33.796272 coreos-metadata[1584]: Dec 16 12:59:33.796 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Dec 16 12:59:34.052596 coreos-metadata[1584]: Dec 16 12:59:34.052 INFO Fetch successful Dec 16 12:59:34.102954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:59:34.107307 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:59:34.159430 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:59:34.161293 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:59:34.161970 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:59:34.163596 systemd[1]: Startup finished in 2.851s (kernel) + 5.190s (initrd) + 5.049s (userspace) = 13.092s. Dec 16 12:59:34.555491 kubelet[1751]: E1216 12:59:34.555439 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:59:34.558940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:59:34.559168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:59:34.559708 systemd[1]: kubelet.service: Consumed 797ms CPU time, 255.1M memory peak. Dec 16 12:59:37.255962 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:59:37.257141 systemd[1]: Started sshd@0-172.239.193.244:22-147.75.109.163:58976.service - OpenSSH per-connection server daemon (147.75.109.163:58976). 
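The kubelet exit recorded above (missing /var/lib/kubelet/config.yaml, unit failed, restart scheduled later in the log) is the normal state of a node that has not yet run kubeadm init or kubeadm join: kubeadm generates that config file, so until then the unit keeps failing and restarting. Purely as an illustration of what the file contains, here is a hypothetical minimal KubeletConfiguration written from Python; every value is an assumption rather than something read from this host, except the containerd socket path, which matches the serving address logged above.

from pathlib import Path

# Hypothetical minimal KubeletConfiguration; kubeadm normally writes the real
# file during `kubeadm init` / `kubeadm join`.
kubelet_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # matches SystemdCgroup=true in the CRI config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: true
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)  # requires root
path.write_text(kubelet_config)
print("wrote", path)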
Dec 16 12:59:37.558143 sshd[1776]: Accepted publickey for core from 147.75.109.163 port 58976 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 12:59:37.560385 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:37.567769 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:59:37.569629 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:59:37.576660 systemd-logind[1599]: New session 1 of user core. Dec 16 12:59:37.587407 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:59:37.590738 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:59:37.602690 (systemd)[1781]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:59:37.605616 systemd-logind[1599]: New session c1 of user core. Dec 16 12:59:37.746617 systemd[1781]: Queued start job for default target default.target. Dec 16 12:59:37.753637 systemd[1781]: Created slice app.slice - User Application Slice. Dec 16 12:59:37.753670 systemd[1781]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:59:37.753684 systemd[1781]: Reached target paths.target - Paths. Dec 16 12:59:37.753845 systemd[1781]: Reached target timers.target - Timers. Dec 16 12:59:37.755706 systemd[1781]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:59:37.758140 systemd[1781]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:59:37.767239 systemd[1781]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:59:37.768721 systemd[1781]: Reached target sockets.target - Sockets. Dec 16 12:59:37.769627 systemd[1781]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:59:37.769835 systemd[1781]: Reached target basic.target - Basic System. Dec 16 12:59:37.769962 systemd[1781]: Reached target default.target - Main User Target. Dec 16 12:59:37.770092 systemd[1781]: Startup finished in 157ms. Dec 16 12:59:37.770103 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:59:37.782155 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:59:37.944749 systemd[1]: Started sshd@1-172.239.193.244:22-147.75.109.163:58984.service - OpenSSH per-connection server daemon (147.75.109.163:58984). Dec 16 12:59:38.231773 sshd[1794]: Accepted publickey for core from 147.75.109.163 port 58984 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 12:59:38.233399 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:38.239068 systemd-logind[1599]: New session 2 of user core. Dec 16 12:59:38.248157 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:59:38.380323 sshd[1797]: Connection closed by 147.75.109.163 port 58984 Dec 16 12:59:38.380806 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:38.384911 systemd[1]: sshd@1-172.239.193.244:22-147.75.109.163:58984.service: Deactivated successfully. Dec 16 12:59:38.386679 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:59:38.387459 systemd-logind[1599]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:59:38.388799 systemd-logind[1599]: Removed session 2. 
Dec 16 12:59:38.438078 systemd[1]: Started sshd@2-172.239.193.244:22-147.75.109.163:59000.service - OpenSSH per-connection server daemon (147.75.109.163:59000). Dec 16 12:59:38.723330 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 59000 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 12:59:38.724702 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:38.730444 systemd-logind[1599]: New session 3 of user core. Dec 16 12:59:38.740164 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:59:38.869606 sshd[1806]: Connection closed by 147.75.109.163 port 59000 Dec 16 12:59:38.870058 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:38.873788 systemd-logind[1599]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:59:38.874222 systemd[1]: sshd@2-172.239.193.244:22-147.75.109.163:59000.service: Deactivated successfully. Dec 16 12:59:38.876127 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:59:38.877722 systemd-logind[1599]: Removed session 3. Dec 16 12:59:38.931596 systemd[1]: Started sshd@3-172.239.193.244:22-147.75.109.163:59008.service - OpenSSH per-connection server daemon (147.75.109.163:59008). Dec 16 12:59:39.213273 sshd[1812]: Accepted publickey for core from 147.75.109.163 port 59008 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 12:59:39.215073 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:39.221068 systemd-logind[1599]: New session 4 of user core. Dec 16 12:59:39.227140 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:59:39.362820 sshd[1815]: Connection closed by 147.75.109.163 port 59008 Dec 16 12:59:39.363303 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:39.367832 systemd-logind[1599]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:59:39.368751 systemd[1]: sshd@3-172.239.193.244:22-147.75.109.163:59008.service: Deactivated successfully. Dec 16 12:59:39.370976 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:59:39.372529 systemd-logind[1599]: Removed session 4. Dec 16 12:59:39.430855 systemd[1]: Started sshd@4-172.239.193.244:22-147.75.109.163:59022.service - OpenSSH per-connection server daemon (147.75.109.163:59022). Dec 16 12:59:39.719878 sshd[1821]: Accepted publickey for core from 147.75.109.163 port 59022 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 12:59:39.721166 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:39.726711 systemd-logind[1599]: New session 5 of user core. Dec 16 12:59:39.732149 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:59:39.834133 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:59:39.834464 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:59:39.847060 sudo[1825]: pam_unix(sudo:session): session closed for user root Dec 16 12:59:39.896334 sshd[1824]: Connection closed by 147.75.109.163 port 59022 Dec 16 12:59:39.896912 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:39.901717 systemd[1]: sshd@4-172.239.193.244:22-147.75.109.163:59022.service: Deactivated successfully. Dec 16 12:59:39.904190 systemd[1]: session-5.scope: Deactivated successfully. 
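The sudo in session 5 above runs setenforce 1, switching SELinux to enforcing mode. A tiny hypothetical check of the resulting state, reading the same selinuxfs node that getenforce consults:

from pathlib import Path

# "1" means enforcing, "0" permissive; requires selinuxfs mounted at /sys/fs/selinux.
state = Path("/sys/fs/selinux/enforce").read_text().strip()
print("SELinux:", "enforcing" if state == "1" else "permissive")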
Dec 16 12:59:39.905795 systemd-logind[1599]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:59:39.906932 systemd-logind[1599]: Removed session 5. Dec 16 12:59:39.969516 systemd[1]: Started sshd@5-172.239.193.244:22-147.75.109.163:59034.service - OpenSSH per-connection server daemon (147.75.109.163:59034). Dec 16 12:59:40.263357 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 59034 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 12:59:40.265155 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:40.271683 systemd-logind[1599]: New session 6 of user core. Dec 16 12:59:40.278143 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:59:40.371814 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:59:40.372221 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:59:40.378215 sudo[1836]: pam_unix(sudo:session): session closed for user root Dec 16 12:59:40.385877 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:59:40.386258 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:59:40.397443 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:59:40.433000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:59:40.435707 kernel: kauditd_printk_skb: 5 callbacks suppressed Dec 16 12:59:40.435735 kernel: audit: type=1305 audit(1765889980.433:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:59:40.439651 augenrules[1858]: No rules Dec 16 12:59:40.433000 audit[1858]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc86e5dcc0 a2=420 a3=0 items=0 ppid=1839 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:40.445652 kernel: audit: type=1300 audit(1765889980.433:230): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc86e5dcc0 a2=420 a3=0 items=0 ppid=1839 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:40.444478 sudo[1835]: pam_unix(sudo:session): session closed for user root Dec 16 12:59:40.441503 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:59:40.441782 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:59:40.448076 kernel: audit: type=1327 audit(1765889980.433:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:59:40.433000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:59:40.451060 kernel: audit: type=1130 audit(1765889980.440:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:40.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.458538 kernel: audit: type=1131 audit(1765889980.440:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.440000 audit[1835]: USER_END pid=1835 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.464661 kernel: audit: type=1106 audit(1765889980.440:233): pid=1835 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.440000 audit[1835]: CRED_DISP pid=1835 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.470929 kernel: audit: type=1104 audit(1765889980.440:234): pid=1835 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.494442 sshd[1834]: Connection closed by 147.75.109.163 port 59034 Dec 16 12:59:40.494824 sshd-session[1831]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:40.496000 audit[1831]: USER_END pid=1831 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:59:40.499653 systemd-logind[1599]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:59:40.500670 systemd[1]: sshd@5-172.239.193.244:22-147.75.109.163:59034.service: Deactivated successfully. Dec 16 12:59:40.503401 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:59:40.496000 audit[1831]: CRED_DISP pid=1831 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:59:40.506956 systemd-logind[1599]: Removed session 6. 
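In the audit records above, and in the long NETFILTER_CFG run further below, the PROCTITLE field is the process's argv hex-encoded with NUL separators between arguments. A small decoding sketch follows; the helper name is made up, but the sample hex is copied from the auditctl record above and decodes to /sbin/auditctl -R /etc/audit/audit.rules.

def decode_proctitle(hex_proctitle: str) -> str:
    """Turn an audit PROCTITLE hex blob into a space-joined command line."""
    raw = bytes.fromhex(hex_proctitle)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

# Sample copied from the type=1327 (PROCTITLE) record above.
sample = ("2F7362696E2F617564697463746C002D52002F657463"
          "2F61756469742F61756469742E72756C6573")
print(decode_proctitle(sample))  # -> /sbin/auditctl -R /etc/audit/audit.rules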
Dec 16 12:59:40.508038 kernel: audit: type=1106 audit(1765889980.496:235): pid=1831 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:59:40.508067 kernel: audit: type=1104 audit(1765889980.496:236): pid=1831 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:59:40.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.239.193.244:22-147.75.109.163:59034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.515173 kernel: audit: type=1131 audit(1765889980.499:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.239.193.244:22-147.75.109.163:59034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.239.193.244:22-147.75.109.163:59048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.561745 systemd[1]: Started sshd@6-172.239.193.244:22-147.75.109.163:59048.service - OpenSSH per-connection server daemon (147.75.109.163:59048). Dec 16 12:59:40.859000 audit[1867]: USER_ACCT pid=1867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:59:40.860319 sshd[1867]: Accepted publickey for core from 147.75.109.163 port 59048 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 12:59:40.860000 audit[1867]: CRED_ACQ pid=1867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:59:40.860000 audit[1867]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbad96600 a2=3 a3=0 items=0 ppid=1 pid=1867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:40.860000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:40.861889 sshd-session[1867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:40.867832 systemd-logind[1599]: New session 7 of user core. Dec 16 12:59:40.874170 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 12:59:40.875000 audit[1867]: USER_START pid=1867 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:59:40.877000 audit[1870]: CRED_ACQ pid=1870 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 12:59:40.967000 audit[1871]: USER_ACCT pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.968577 sudo[1871]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:59:40.967000 audit[1871]: CRED_REFR pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:59:40.968910 sudo[1871]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:59:40.970000 audit[1871]: USER_START pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:59:41.326272 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:59:41.337319 (dockerd)[1890]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:59:41.573943 dockerd[1890]: time="2025-12-16T12:59:41.573898008Z" level=info msg="Starting up" Dec 16 12:59:41.575214 dockerd[1890]: time="2025-12-16T12:59:41.575195738Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:59:41.587716 dockerd[1890]: time="2025-12-16T12:59:41.587608108Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:59:41.599932 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport758666523-merged.mount: Deactivated successfully. Dec 16 12:59:41.632730 dockerd[1890]: time="2025-12-16T12:59:41.632682618Z" level=info msg="Loading containers: start." 
Dec 16 12:59:41.645038 kernel: Initializing XFRM netlink socket Dec 16 12:59:41.709000 audit[1938]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.709000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe86b06910 a2=0 a3=0 items=0 ppid=1890 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:59:41.713000 audit[1940]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.713000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffff8002750 a2=0 a3=0 items=0 ppid=1890 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.713000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:59:41.715000 audit[1942]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.715000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd938f1c40 a2=0 a3=0 items=0 ppid=1890 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:59:41.718000 audit[1944]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.718000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda9d3b460 a2=0 a3=0 items=0 ppid=1890 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:59:41.720000 audit[1946]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.720000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffebe20e9f0 a2=0 a3=0 items=0 ppid=1890 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:59:41.723000 audit[1948]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.723000 audit[1948]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffec02e3e70 a2=0 a3=0 items=0 ppid=1890 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:59:41.725000 audit[1950]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.725000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffed548a230 a2=0 a3=0 items=0 ppid=1890 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.725000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:59:41.728000 audit[1952]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.728000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd0bf2e3e0 a2=0 a3=0 items=0 ppid=1890 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.728000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:59:41.754000 audit[1955]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.754000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd6888b960 a2=0 a3=0 items=0 ppid=1890 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.754000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:59:41.756000 audit[1957]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.756000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff345e1eb0 a2=0 a3=0 items=0 ppid=1890 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:59:41.759000 audit[1959]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.759000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe60ff6e20 a2=0 
a3=0 items=0 ppid=1890 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.759000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:59:41.765000 audit[1961]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.765000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc5d54dfa0 a2=0 a3=0 items=0 ppid=1890 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.765000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:59:41.767000 audit[1963]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.767000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc14cfda50 a2=0 a3=0 items=0 ppid=1890 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.767000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:59:41.812000 audit[1993]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.812000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd0772dae0 a2=0 a3=0 items=0 ppid=1890 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.812000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:59:41.814000 audit[1995]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.814000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd19f8df30 a2=0 a3=0 items=0 ppid=1890 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:59:41.817000 audit[1997]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.817000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd983d80d0 a2=0 a3=0 items=0 ppid=1890 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:59:41.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:59:41.819000 audit[1999]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.819000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe63bfdf00 a2=0 a3=0 items=0 ppid=1890 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.819000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:59:41.821000 audit[2001]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.821000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe8a7b3940 a2=0 a3=0 items=0 ppid=1890 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:59:41.824000 audit[2003]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.824000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc64c4cd80 a2=0 a3=0 items=0 ppid=1890 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:59:41.826000 audit[2005]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.826000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc25039240 a2=0 a3=0 items=0 ppid=1890 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:59:41.829000 audit[2007]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.829000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcedaf9b70 a2=0 a3=0 items=0 ppid=1890 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.829000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:59:41.831000 audit[2009]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.831000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff5a73fce0 a2=0 a3=0 items=0 ppid=1890 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.831000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:59:41.834000 audit[2011]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.834000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdbe952b40 a2=0 a3=0 items=0 ppid=1890 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:59:41.837000 audit[2013]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.837000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc0e6d6b80 a2=0 a3=0 items=0 ppid=1890 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.837000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:59:41.840000 audit[2015]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.840000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdf6270500 a2=0 a3=0 items=0 ppid=1890 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.840000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:59:41.843000 audit[2017]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.843000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe57291170 a2=0 a3=0 items=0 ppid=1890 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.843000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:59:41.849000 audit[2022]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.849000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed992ad60 a2=0 a3=0 items=0 ppid=1890 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:59:41.852000 audit[2024]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.852000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff5ed2b610 a2=0 a3=0 items=0 ppid=1890 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:59:41.854000 audit[2026]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.854000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcdaf330d0 a2=0 a3=0 items=0 ppid=1890 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.854000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:59:41.856000 audit[2028]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.856000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe5d7ebbb0 a2=0 a3=0 items=0 ppid=1890 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.856000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:59:41.859000 audit[2030]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.859000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffca608f290 a2=0 a3=0 items=0 ppid=1890 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.859000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:59:41.862000 audit[2032]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2032 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:41.862000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc3c0715c0 a2=0 a3=0 items=0 ppid=1890 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.862000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:59:41.883000 audit[2036]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.883000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffea2f12a50 a2=0 a3=0 items=0 ppid=1890 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.883000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:59:41.888000 audit[2040]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.888000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcb463f510 a2=0 a3=0 items=0 ppid=1890 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:59:41.899000 audit[2048]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.899000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fffb62b8fb0 a2=0 a3=0 items=0 ppid=1890 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.899000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:59:41.911000 audit[2054]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.911000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe13d33280 a2=0 a3=0 items=0 ppid=1890 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:59:41.913000 audit[2056]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 
12:59:41.913000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffdd1151670 a2=0 a3=0 items=0 ppid=1890 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:59:41.916000 audit[2058]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.916000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc6b3ab670 a2=0 a3=0 items=0 ppid=1890 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.916000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:59:41.920000 audit[2060]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.920000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffafd3ff60 a2=0 a3=0 items=0 ppid=1890 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:59:41.923000 audit[2062]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:41.923000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdbae037a0 a2=0 a3=0 items=0 ppid=1890 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:41.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:59:41.925136 systemd-networkd[1517]: docker0: Link UP Dec 16 12:59:41.928443 dockerd[1890]: time="2025-12-16T12:59:41.928414868Z" level=info msg="Loading containers: done." 
Dec 16 12:59:41.946083 dockerd[1890]: time="2025-12-16T12:59:41.946050188Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:59:41.946212 dockerd[1890]: time="2025-12-16T12:59:41.946124438Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:59:41.946212 dockerd[1890]: time="2025-12-16T12:59:41.946197328Z" level=info msg="Initializing buildkit" Dec 16 12:59:41.966962 dockerd[1890]: time="2025-12-16T12:59:41.966817568Z" level=info msg="Completed buildkit initialization" Dec 16 12:59:41.974302 dockerd[1890]: time="2025-12-16T12:59:41.974253028Z" level=info msg="Daemon has completed initialization" Dec 16 12:59:41.974860 dockerd[1890]: time="2025-12-16T12:59:41.974638778Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:59:41.974970 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:59:41.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:42.512478 containerd[1618]: time="2025-12-16T12:59:42.512425118Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 12:59:43.244517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount110164848.mount: Deactivated successfully. Dec 16 12:59:43.961166 containerd[1618]: time="2025-12-16T12:59:43.961096028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:43.962217 containerd[1618]: time="2025-12-16T12:59:43.962040858Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25399329" Dec 16 12:59:43.962786 containerd[1618]: time="2025-12-16T12:59:43.962756178Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:43.964866 containerd[1618]: time="2025-12-16T12:59:43.964838068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:43.965711 containerd[1618]: time="2025-12-16T12:59:43.965675168Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.4532149s" Dec 16 12:59:43.965756 containerd[1618]: time="2025-12-16T12:59:43.965711718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 16 12:59:43.966675 containerd[1618]: time="2025-12-16T12:59:43.966645768Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 12:59:44.809600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Dec 16 12:59:44.811940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:59:45.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:45.016198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:59:45.023537 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:59:45.080651 kubelet[2171]: E1216 12:59:45.080258 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:59:45.087239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:59:45.087622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:59:45.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:59:45.088464 systemd[1]: kubelet.service: Consumed 206ms CPU time, 110.7M memory peak. Dec 16 12:59:45.320892 containerd[1618]: time="2025-12-16T12:59:45.320845468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:45.322130 containerd[1618]: time="2025-12-16T12:59:45.322085348Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Dec 16 12:59:45.322714 containerd[1618]: time="2025-12-16T12:59:45.322655098Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:45.324557 containerd[1618]: time="2025-12-16T12:59:45.324537808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:45.325736 containerd[1618]: time="2025-12-16T12:59:45.325411568Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.35873842s" Dec 16 12:59:45.325736 containerd[1618]: time="2025-12-16T12:59:45.325446108Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 16 12:59:45.326119 containerd[1618]: time="2025-12-16T12:59:45.326084508Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 12:59:46.342074 containerd[1618]: time="2025-12-16T12:59:46.341973628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:46.343079 containerd[1618]: time="2025-12-16T12:59:46.343048248Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Dec 16 12:59:46.343759 containerd[1618]: time="2025-12-16T12:59:46.343729828Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:46.346287 containerd[1618]: time="2025-12-16T12:59:46.346257678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:46.347089 containerd[1618]: time="2025-12-16T12:59:46.347068728Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.02093239s" Dec 16 12:59:46.347163 containerd[1618]: time="2025-12-16T12:59:46.347149138Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 16 12:59:46.348069 containerd[1618]: time="2025-12-16T12:59:46.348049498Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 12:59:47.462903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417359405.mount: Deactivated successfully. Dec 16 12:59:47.711112 containerd[1618]: time="2025-12-16T12:59:47.711061738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:47.712114 containerd[1618]: time="2025-12-16T12:59:47.712072288Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Dec 16 12:59:47.712465 containerd[1618]: time="2025-12-16T12:59:47.712435788Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:47.715236 containerd[1618]: time="2025-12-16T12:59:47.714280988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:47.715236 containerd[1618]: time="2025-12-16T12:59:47.714827438Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.36667172s" Dec 16 12:59:47.715236 containerd[1618]: time="2025-12-16T12:59:47.714850828Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 12:59:47.715693 containerd[1618]: time="2025-12-16T12:59:47.715667228Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 12:59:48.552068 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753810164.mount: Deactivated successfully. Dec 16 12:59:49.145454 containerd[1618]: time="2025-12-16T12:59:49.145394688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:49.146372 containerd[1618]: time="2025-12-16T12:59:49.146325908Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568837" Dec 16 12:59:49.148043 containerd[1618]: time="2025-12-16T12:59:49.146887998Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:49.149235 containerd[1618]: time="2025-12-16T12:59:49.149196668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:49.150792 containerd[1618]: time="2025-12-16T12:59:49.150217268Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.43452073s" Dec 16 12:59:49.150792 containerd[1618]: time="2025-12-16T12:59:49.150255628Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 16 12:59:49.151064 containerd[1618]: time="2025-12-16T12:59:49.151043978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 12:59:49.771712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056527320.mount: Deactivated successfully. 
Dec 16 12:59:49.776367 containerd[1618]: time="2025-12-16T12:59:49.776314378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:49.777271 containerd[1618]: time="2025-12-16T12:59:49.777173528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 16 12:59:49.777800 containerd[1618]: time="2025-12-16T12:59:49.777774758Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:49.780159 containerd[1618]: time="2025-12-16T12:59:49.780122488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:49.781220 containerd[1618]: time="2025-12-16T12:59:49.781188648Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 630.02255ms" Dec 16 12:59:49.781266 containerd[1618]: time="2025-12-16T12:59:49.781219258Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 16 12:59:49.781693 containerd[1618]: time="2025-12-16T12:59:49.781662938Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 12:59:50.417392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1969857991.mount: Deactivated successfully. Dec 16 12:59:52.364985 containerd[1618]: time="2025-12-16T12:59:52.364892278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:52.366062 containerd[1618]: time="2025-12-16T12:59:52.365814738Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Dec 16 12:59:52.366705 containerd[1618]: time="2025-12-16T12:59:52.366648088Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:52.368884 containerd[1618]: time="2025-12-16T12:59:52.368846408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:59:52.370220 containerd[1618]: time="2025-12-16T12:59:52.369767978Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.5880783s" Dec 16 12:59:52.370220 containerd[1618]: time="2025-12-16T12:59:52.369794938Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 12:59:54.813102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:59:54.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:54.815943 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 12:59:54.816056 kernel: audit: type=1130 audit(1765889994.812:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:54.814694 systemd[1]: kubelet.service: Consumed 206ms CPU time, 110.7M memory peak. Dec 16 12:59:54.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:54.825216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:59:54.827106 kernel: audit: type=1131 audit(1765889994.813:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:54.865803 systemd[1]: Reload requested from client PID 2327 ('systemctl') (unit session-7.scope)... Dec 16 12:59:54.865824 systemd[1]: Reloading... Dec 16 12:59:55.014044 zram_generator::config[2376]: No configuration found. Dec 16 12:59:55.235351 systemd[1]: Reloading finished in 368 ms. Dec 16 12:59:55.268000 audit: BPF prog-id=67 op=LOAD Dec 16 12:59:55.272044 kernel: audit: type=1334 audit(1765889995.268:292): prog-id=67 op=LOAD Dec 16 12:59:55.276000 audit: BPF prog-id=49 op=UNLOAD Dec 16 12:59:55.280033 kernel: audit: type=1334 audit(1765889995.276:293): prog-id=49 op=UNLOAD Dec 16 12:59:55.276000 audit: BPF prog-id=68 op=LOAD Dec 16 12:59:55.284053 kernel: audit: type=1334 audit(1765889995.276:294): prog-id=68 op=LOAD Dec 16 12:59:55.286036 kernel: audit: type=1334 audit(1765889995.276:295): prog-id=69 op=LOAD Dec 16 12:59:55.276000 audit: BPF prog-id=69 op=LOAD Dec 16 12:59:55.276000 audit: BPF prog-id=50 op=UNLOAD Dec 16 12:59:55.290246 kernel: audit: type=1334 audit(1765889995.276:296): prog-id=50 op=UNLOAD Dec 16 12:59:55.290289 kernel: audit: type=1334 audit(1765889995.276:297): prog-id=51 op=UNLOAD Dec 16 12:59:55.276000 audit: BPF prog-id=51 op=UNLOAD Dec 16 12:59:55.292676 kernel: audit: type=1334 audit(1765889995.277:298): prog-id=70 op=LOAD Dec 16 12:59:55.277000 audit: BPF prog-id=70 op=LOAD Dec 16 12:59:55.295094 kernel: audit: type=1334 audit(1765889995.277:299): prog-id=46 op=UNLOAD Dec 16 12:59:55.277000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:59:55.277000 audit: BPF prog-id=71 op=LOAD Dec 16 12:59:55.277000 audit: BPF prog-id=72 op=LOAD Dec 16 12:59:55.277000 audit: BPF prog-id=47 op=UNLOAD Dec 16 12:59:55.277000 audit: BPF prog-id=48 op=UNLOAD Dec 16 12:59:55.279000 audit: BPF prog-id=73 op=LOAD Dec 16 12:59:55.279000 audit: BPF prog-id=55 op=UNLOAD Dec 16 12:59:55.280000 audit: BPF prog-id=74 op=LOAD Dec 16 12:59:55.280000 audit: BPF prog-id=58 op=UNLOAD Dec 16 12:59:55.281000 audit: BPF prog-id=75 op=LOAD Dec 16 12:59:55.281000 audit: BPF prog-id=66 op=UNLOAD Dec 16 12:59:55.283000 audit: BPF prog-id=76 op=LOAD Dec 16 12:59:55.283000 audit: BPF prog-id=52 op=UNLOAD Dec 16 12:59:55.283000 audit: BPF prog-id=77 op=LOAD Dec 16 12:59:55.283000 audit: BPF prog-id=78 op=LOAD Dec 16 12:59:55.283000 audit: BPF prog-id=53 
op=UNLOAD Dec 16 12:59:55.283000 audit: BPF prog-id=54 op=UNLOAD Dec 16 12:59:55.284000 audit: BPF prog-id=79 op=LOAD Dec 16 12:59:55.284000 audit: BPF prog-id=63 op=UNLOAD Dec 16 12:59:55.284000 audit: BPF prog-id=80 op=LOAD Dec 16 12:59:55.284000 audit: BPF prog-id=81 op=LOAD Dec 16 12:59:55.284000 audit: BPF prog-id=64 op=UNLOAD Dec 16 12:59:55.284000 audit: BPF prog-id=65 op=UNLOAD Dec 16 12:59:55.285000 audit: BPF prog-id=82 op=LOAD Dec 16 12:59:55.285000 audit: BPF prog-id=59 op=UNLOAD Dec 16 12:59:55.286000 audit: BPF prog-id=83 op=LOAD Dec 16 12:59:55.286000 audit: BPF prog-id=60 op=UNLOAD Dec 16 12:59:55.287000 audit: BPF prog-id=84 op=LOAD Dec 16 12:59:55.287000 audit: BPF prog-id=85 op=LOAD Dec 16 12:59:55.287000 audit: BPF prog-id=61 op=UNLOAD Dec 16 12:59:55.287000 audit: BPF prog-id=62 op=UNLOAD Dec 16 12:59:55.288000 audit: BPF prog-id=86 op=LOAD Dec 16 12:59:55.288000 audit: BPF prog-id=87 op=LOAD Dec 16 12:59:55.288000 audit: BPF prog-id=56 op=UNLOAD Dec 16 12:59:55.288000 audit: BPF prog-id=57 op=UNLOAD Dec 16 12:59:55.289000 audit: BPF prog-id=88 op=LOAD Dec 16 12:59:55.289000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:59:55.289000 audit: BPF prog-id=89 op=LOAD Dec 16 12:59:55.289000 audit: BPF prog-id=90 op=LOAD Dec 16 12:59:55.289000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:59:55.289000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:59:55.307704 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:59:55.307810 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:59:55.308166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:59:55.308217 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.3M memory peak. Dec 16 12:59:55.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:59:55.309755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:59:55.503709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:59:55.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:55.511296 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:59:55.552120 kubelet[2428]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:59:55.552504 kubelet[2428]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:59:55.554061 kubelet[2428]: I1216 12:59:55.553655 2428 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:59:56.267032 kubelet[2428]: I1216 12:59:56.266271 2428 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:59:56.267032 kubelet[2428]: I1216 12:59:56.266302 2428 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:59:56.267032 kubelet[2428]: I1216 12:59:56.266329 2428 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:59:56.267032 kubelet[2428]: I1216 12:59:56.266335 2428 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:59:56.267032 kubelet[2428]: I1216 12:59:56.266662 2428 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:59:56.271894 kubelet[2428]: E1216 12:59:56.271861 2428 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.193.244:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.193.244:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:59:56.272226 kubelet[2428]: I1216 12:59:56.272198 2428 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:59:56.276045 kubelet[2428]: I1216 12:59:56.276025 2428 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:59:56.280701 kubelet[2428]: I1216 12:59:56.280193 2428 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 12:59:56.281964 kubelet[2428]: I1216 12:59:56.281943 2428 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:59:56.282184 kubelet[2428]: I1216 12:59:56.282050 2428 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-193-244","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:59:56.282319 kubelet[2428]: I1216 12:59:56.282307 2428 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:59:56.282375 kubelet[2428]: I1216 12:59:56.282366 2428 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:59:56.282487 kubelet[2428]: I1216 12:59:56.282476 2428 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:59:56.284177 kubelet[2428]: I1216 12:59:56.284163 2428 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:59:56.284404 kubelet[2428]: I1216 12:59:56.284392 2428 kubelet.go:475] "Attempting to sync node with API server" Dec 16 12:59:56.284673 kubelet[2428]: I1216 12:59:56.284663 2428 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:59:56.284730 kubelet[2428]: I1216 12:59:56.284721 2428 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:59:56.284798 kubelet[2428]: I1216 12:59:56.284789 2428 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:59:56.288034 kubelet[2428]: E1216 12:59:56.287994 2428 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.193.244:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-193-244&limit=500&resourceVersion=0\": dial tcp 172.239.193.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:59:56.288192 kubelet[2428]: E1216 12:59:56.288154 2428 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://172.239.193.244:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.193.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:59:56.288450 kubelet[2428]: I1216 12:59:56.288436 2428 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:59:56.289051 kubelet[2428]: I1216 12:59:56.289038 2428 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:59:56.289122 kubelet[2428]: I1216 12:59:56.289113 2428 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:59:56.289204 kubelet[2428]: W1216 12:59:56.289194 2428 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:59:56.294192 kubelet[2428]: I1216 12:59:56.294179 2428 server.go:1262] "Started kubelet" Dec 16 12:59:56.294488 kubelet[2428]: I1216 12:59:56.294459 2428 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:59:56.297202 kubelet[2428]: I1216 12:59:56.297187 2428 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:59:56.298338 kubelet[2428]: I1216 12:59:56.297830 2428 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:59:56.298410 kubelet[2428]: I1216 12:59:56.298397 2428 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:59:56.298724 kubelet[2428]: I1216 12:59:56.298711 2428 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:59:56.299682 kubelet[2428]: I1216 12:59:56.299668 2428 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:59:56.301685 kubelet[2428]: E1216 12:59:56.300606 2428 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.193.244:6443/api/v1/namespaces/default/events\": dial tcp 172.239.193.244:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-193-244.1881b398758747ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-193-244,UID:172-239-193-244,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-193-244,},FirstTimestamp:2025-12-16 12:59:56.294146028 +0000 UTC m=+0.778258821,LastTimestamp:2025-12-16 12:59:56.294146028 +0000 UTC m=+0.778258821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-193-244,}" Dec 16 12:59:56.302061 kubelet[2428]: I1216 12:59:56.301836 2428 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:59:56.305530 kubelet[2428]: E1216 12:59:56.305503 2428 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:59:56.305609 kubelet[2428]: E1216 12:59:56.305589 2428 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-244\" not found" Dec 16 12:59:56.305646 kubelet[2428]: I1216 12:59:56.305631 2428 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:59:56.305767 kubelet[2428]: I1216 12:59:56.305746 2428 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:59:56.305797 kubelet[2428]: I1216 12:59:56.305789 2428 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:59:56.306322 kubelet[2428]: I1216 12:59:56.306303 2428 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:59:56.306522 kubelet[2428]: I1216 12:59:56.306363 2428 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:59:56.307154 kubelet[2428]: E1216 12:59:56.306560 2428 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.193.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.193.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:59:56.307289 kubelet[2428]: I1216 12:59:56.307250 2428 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:59:56.306000 audit[2444]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:56.306000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd01dda600 a2=0 a3=0 items=0 ppid=2428 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.306000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:59:56.307893 kubelet[2428]: I1216 12:59:56.307856 2428 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:59:56.307000 audit[2445]: NETFILTER_CFG table=mangle:43 family=2 entries=2 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:56.307000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe986081f0 a2=0 a3=0 items=0 ppid=2428 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.307000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:59:56.309000 audit[2446]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:56.309000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5d6f8380 a2=0 a3=0 items=0 ppid=2428 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:59:56.311000 audit[2448]: NETFILTER_CFG table=mangle:45 family=10 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:56.311000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf36b4860 a2=0 a3=0 items=0 ppid=2428 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:59:56.313000 audit[2449]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:56.313000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa02bca20 a2=0 a3=0 items=0 ppid=2428 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.313000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:59:56.314000 audit[2450]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:59:56.314000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffaf392ea0 a2=0 a3=0 items=0 ppid=2428 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:59:56.315000 audit[2451]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:56.315000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff05af13f0 a2=0 a3=0 
items=0 ppid=2428 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.315000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:59:56.317000 audit[2453]: NETFILTER_CFG table=filter:49 family=2 entries=2 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:56.317000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe60d232f0 a2=0 a3=0 items=0 ppid=2428 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:59:56.325000 audit[2456]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:56.325000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd26e898f0 a2=0 a3=0 items=0 ppid=2428 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.325000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 12:59:56.328174 kubelet[2428]: E1216 12:59:56.326391 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-244?timeout=10s\": dial tcp 172.239.193.244:6443: connect: connection refused" interval="200ms" Dec 16 12:59:56.329754 kubelet[2428]: I1216 12:59:56.329728 2428 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 16 12:59:56.329754 kubelet[2428]: I1216 12:59:56.329748 2428 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:59:56.329818 kubelet[2428]: I1216 12:59:56.329778 2428 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:59:56.329841 kubelet[2428]: E1216 12:59:56.329814 2428 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:59:56.330000 audit[2460]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:56.330000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff31046b0 a2=0 a3=0 items=0 ppid=2428 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:59:56.332000 audit[2461]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:56.332000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff87891740 a2=0 a3=0 items=0 ppid=2428 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:59:56.334265 kubelet[2428]: E1216 12:59:56.334239 2428 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.193.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.193.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:59:56.335382 kubelet[2428]: I1216 12:59:56.335367 2428 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:59:56.335382 kubelet[2428]: I1216 12:59:56.335379 2428 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:59:56.335461 kubelet[2428]: I1216 12:59:56.335392 2428 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:59:56.335000 audit[2463]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:59:56.335000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc341c3280 a2=0 a3=0 items=0 ppid=2428 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:56.336797 kubelet[2428]: I1216 12:59:56.336759 2428 policy_none.go:49] "None policy: Start" Dec 16 12:59:56.336826 kubelet[2428]: I1216 12:59:56.336798 2428 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:59:56.336826 kubelet[2428]: I1216 12:59:56.336810 2428 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:59:56.335000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:59:56.337814 kubelet[2428]: I1216 12:59:56.337791 2428 policy_none.go:47] "Start" Dec 16 12:59:56.343426 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:59:56.355910 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:59:56.359370 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:59:56.369183 kubelet[2428]: E1216 12:59:56.369162 2428 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:59:56.369625 kubelet[2428]: I1216 12:59:56.369313 2428 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:59:56.369625 kubelet[2428]: I1216 12:59:56.369323 2428 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:59:56.369625 kubelet[2428]: I1216 12:59:56.369460 2428 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:59:56.371130 kubelet[2428]: E1216 12:59:56.371090 2428 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:59:56.371534 kubelet[2428]: E1216 12:59:56.371521 2428 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-193-244\" not found" Dec 16 12:59:56.439169 systemd[1]: Created slice kubepods-burstable-pod51e677e3d07b0d26f5507864b6f019d4.slice - libcontainer container kubepods-burstable-pod51e677e3d07b0d26f5507864b6f019d4.slice. Dec 16 12:59:56.449779 kubelet[2428]: E1216 12:59:56.449749 2428 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-244\" not found" node="172-239-193-244" Dec 16 12:59:56.454229 systemd[1]: Created slice kubepods-burstable-pod24ddbc6ec46e114d80dc0c6d1d7ec496.slice - libcontainer container kubepods-burstable-pod24ddbc6ec46e114d80dc0c6d1d7ec496.slice. Dec 16 12:59:56.465235 kubelet[2428]: E1216 12:59:56.465214 2428 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-244\" not found" node="172-239-193-244" Dec 16 12:59:56.470381 systemd[1]: Created slice kubepods-burstable-pod4917da192cf85b32497da7945a35ade2.slice - libcontainer container kubepods-burstable-pod4917da192cf85b32497da7945a35ade2.slice. 
Dec 16 12:59:56.472577 kubelet[2428]: I1216 12:59:56.472474 2428 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-244" Dec 16 12:59:56.472940 kubelet[2428]: E1216 12:59:56.472922 2428 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-244\" not found" node="172-239-193-244" Dec 16 12:59:56.473057 kubelet[2428]: E1216 12:59:56.472983 2428 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.193.244:6443/api/v1/nodes\": dial tcp 172.239.193.244:6443: connect: connection refused" node="172-239-193-244" Dec 16 12:59:56.507322 kubelet[2428]: I1216 12:59:56.507275 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24ddbc6ec46e114d80dc0c6d1d7ec496-ca-certs\") pod \"kube-controller-manager-172-239-193-244\" (UID: \"24ddbc6ec46e114d80dc0c6d1d7ec496\") " pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 12:59:56.507322 kubelet[2428]: I1216 12:59:56.507297 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/24ddbc6ec46e114d80dc0c6d1d7ec496-flexvolume-dir\") pod \"kube-controller-manager-172-239-193-244\" (UID: \"24ddbc6ec46e114d80dc0c6d1d7ec496\") " pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 12:59:56.507513 kubelet[2428]: I1216 12:59:56.507325 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24ddbc6ec46e114d80dc0c6d1d7ec496-k8s-certs\") pod \"kube-controller-manager-172-239-193-244\" (UID: \"24ddbc6ec46e114d80dc0c6d1d7ec496\") " pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 12:59:56.507513 kubelet[2428]: I1216 12:59:56.507353 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24ddbc6ec46e114d80dc0c6d1d7ec496-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-193-244\" (UID: \"24ddbc6ec46e114d80dc0c6d1d7ec496\") " pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 12:59:56.507513 kubelet[2428]: I1216 12:59:56.507401 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4917da192cf85b32497da7945a35ade2-kubeconfig\") pod \"kube-scheduler-172-239-193-244\" (UID: \"4917da192cf85b32497da7945a35ade2\") " pod="kube-system/kube-scheduler-172-239-193-244" Dec 16 12:59:56.507513 kubelet[2428]: I1216 12:59:56.507417 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51e677e3d07b0d26f5507864b6f019d4-ca-certs\") pod \"kube-apiserver-172-239-193-244\" (UID: \"51e677e3d07b0d26f5507864b6f019d4\") " pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 12:59:56.507513 kubelet[2428]: I1216 12:59:56.507428 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51e677e3d07b0d26f5507864b6f019d4-k8s-certs\") pod \"kube-apiserver-172-239-193-244\" (UID: \"51e677e3d07b0d26f5507864b6f019d4\") " pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 12:59:56.507626 kubelet[2428]: I1216 
12:59:56.507441 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51e677e3d07b0d26f5507864b6f019d4-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-193-244\" (UID: \"51e677e3d07b0d26f5507864b6f019d4\") " pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 12:59:56.507626 kubelet[2428]: I1216 12:59:56.507454 2428 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24ddbc6ec46e114d80dc0c6d1d7ec496-kubeconfig\") pod \"kube-controller-manager-172-239-193-244\" (UID: \"24ddbc6ec46e114d80dc0c6d1d7ec496\") " pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 12:59:56.526957 kubelet[2428]: E1216 12:59:56.526861 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-244?timeout=10s\": dial tcp 172.239.193.244:6443: connect: connection refused" interval="400ms" Dec 16 12:59:56.675106 kubelet[2428]: I1216 12:59:56.674936 2428 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-244" Dec 16 12:59:56.675368 kubelet[2428]: E1216 12:59:56.675144 2428 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.193.244:6443/api/v1/nodes\": dial tcp 172.239.193.244:6443: connect: connection refused" node="172-239-193-244" Dec 16 12:59:56.752742 kubelet[2428]: E1216 12:59:56.752519 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:56.753263 containerd[1618]: time="2025-12-16T12:59:56.753233308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-193-244,Uid:51e677e3d07b0d26f5507864b6f019d4,Namespace:kube-system,Attempt:0,}" Dec 16 12:59:56.766816 kubelet[2428]: E1216 12:59:56.766796 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:56.767157 containerd[1618]: time="2025-12-16T12:59:56.767042288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-193-244,Uid:24ddbc6ec46e114d80dc0c6d1d7ec496,Namespace:kube-system,Attempt:0,}" Dec 16 12:59:56.774184 kubelet[2428]: E1216 12:59:56.774169 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:56.774456 containerd[1618]: time="2025-12-16T12:59:56.774427608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-193-244,Uid:4917da192cf85b32497da7945a35ade2,Namespace:kube-system,Attempt:0,}" Dec 16 12:59:56.927390 kubelet[2428]: E1216 12:59:56.927316 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-244?timeout=10s\": dial tcp 172.239.193.244:6443: connect: connection refused" interval="800ms" Dec 16 12:59:57.077257 kubelet[2428]: I1216 12:59:57.077231 2428 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-244" Dec 16 12:59:57.077432 
kubelet[2428]: E1216 12:59:57.077414 2428 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.193.244:6443/api/v1/nodes\": dial tcp 172.239.193.244:6443: connect: connection refused" node="172-239-193-244" Dec 16 12:59:57.133344 kubelet[2428]: E1216 12:59:57.133289 2428 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.193.244:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.193.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:59:57.256397 kubelet[2428]: E1216 12:59:57.256332 2428 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.193.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.193.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:59:57.312226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119309846.mount: Deactivated successfully. Dec 16 12:59:57.316143 containerd[1618]: time="2025-12-16T12:59:57.316116228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:59:57.317362 containerd[1618]: time="2025-12-16T12:59:57.317338358Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:59:57.318261 containerd[1618]: time="2025-12-16T12:59:57.318242168Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:59:57.318678 containerd[1618]: time="2025-12-16T12:59:57.318616328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:59:57.319478 containerd[1618]: time="2025-12-16T12:59:57.319454788Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:59:57.321424 containerd[1618]: time="2025-12-16T12:59:57.321396608Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:59:57.327407 containerd[1618]: time="2025-12-16T12:59:57.327374708Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:59:57.328577 containerd[1618]: time="2025-12-16T12:59:57.328385348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:59:57.329909 containerd[1618]: time="2025-12-16T12:59:57.329882108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", 
size \"320368\" in 561.51753ms" Dec 16 12:59:57.331484 containerd[1618]: time="2025-12-16T12:59:57.331462518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 555.86536ms" Dec 16 12:59:57.333056 containerd[1618]: time="2025-12-16T12:59:57.333025738Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 577.87905ms" Dec 16 12:59:57.354528 kubelet[2428]: E1216 12:59:57.354449 2428 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.193.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.193.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:59:57.357221 containerd[1618]: time="2025-12-16T12:59:57.357165398Z" level=info msg="connecting to shim f61f0efdc112df795ea3414738b2ed8766cc7650a88cce4303176530454240b0" address="unix:///run/containerd/s/e7ec4e39946f728a70faed4d3b76c92822698ddf072832e5d3120942c21f7182" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:57.360122 containerd[1618]: time="2025-12-16T12:59:57.360101368Z" level=info msg="connecting to shim 11736fdb7357031cceea2b57f12d9fcc6395db3deeddedc06562086dd9b345bd" address="unix:///run/containerd/s/8240c6207912e6f67c9109acf4040300f3be7f6893ebe5253485ef830eb0bc01" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:57.369459 containerd[1618]: time="2025-12-16T12:59:57.369148488Z" level=info msg="connecting to shim cf0aea76ab0d043e68a3eacf4c817075afbc1e3abd34a51d38285b05255bf5fb" address="unix:///run/containerd/s/20cdc986fe5908b9f9f878ffde8ede7622acb2cf27e42d55d3cacd0f909cc9e4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:59:57.396299 systemd[1]: Started cri-containerd-f61f0efdc112df795ea3414738b2ed8766cc7650a88cce4303176530454240b0.scope - libcontainer container f61f0efdc112df795ea3414738b2ed8766cc7650a88cce4303176530454240b0. Dec 16 12:59:57.401781 systemd[1]: Started cri-containerd-11736fdb7357031cceea2b57f12d9fcc6395db3deeddedc06562086dd9b345bd.scope - libcontainer container 11736fdb7357031cceea2b57f12d9fcc6395db3deeddedc06562086dd9b345bd. Dec 16 12:59:57.408531 systemd[1]: Started cri-containerd-cf0aea76ab0d043e68a3eacf4c817075afbc1e3abd34a51d38285b05255bf5fb.scope - libcontainer container cf0aea76ab0d043e68a3eacf4c817075afbc1e3abd34a51d38285b05255bf5fb. 
Dec 16 12:59:57.420000 audit: BPF prog-id=91 op=LOAD Dec 16 12:59:57.421000 audit: BPF prog-id=92 op=LOAD Dec 16 12:59:57.421000 audit[2527]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2492 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373336666462373335373033316363656561326235376631326439 Dec 16 12:59:57.421000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:59:57.421000 audit[2527]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2492 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373336666462373335373033316363656561326235376631326439 Dec 16 12:59:57.422000 audit: BPF prog-id=93 op=LOAD Dec 16 12:59:57.422000 audit[2527]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2492 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373336666462373335373033316363656561326235376631326439 Dec 16 12:59:57.422000 audit: BPF prog-id=94 op=LOAD Dec 16 12:59:57.422000 audit[2527]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2492 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373336666462373335373033316363656561326235376631326439 Dec 16 12:59:57.422000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:59:57.422000 audit[2527]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2492 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373336666462373335373033316363656561326235376631326439 Dec 16 12:59:57.422000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:59:57.422000 audit[2527]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2492 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373336666462373335373033316363656561326235376631326439 Dec 16 12:59:57.422000 audit: BPF prog-id=95 op=LOAD Dec 16 12:59:57.422000 audit[2527]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2492 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131373336666462373335373033316363656561326235376631326439 Dec 16 12:59:57.430000 audit: BPF prog-id=96 op=LOAD Dec 16 12:59:57.430000 audit: BPF prog-id=97 op=LOAD Dec 16 12:59:57.430000 audit[2537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=2510 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366306165613736616230643034336536386133656163663463383137 Dec 16 12:59:57.430000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:59:57.430000 audit[2537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366306165613736616230643034336536386133656163663463383137 Dec 16 12:59:57.431000 audit: BPF prog-id=98 op=LOAD Dec 16 12:59:57.431000 audit[2537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=2510 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366306165613736616230643034336536386133656163663463383137 Dec 16 12:59:57.431000 audit: BPF prog-id=99 op=LOAD Dec 16 12:59:57.431000 audit[2537]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=2510 pid=2537 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366306165613736616230643034336536386133656163663463383137 Dec 16 12:59:57.431000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:59:57.431000 audit[2537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366306165613736616230643034336536386133656163663463383137 Dec 16 12:59:57.432000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:59:57.432000 audit[2537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366306165613736616230643034336536386133656163663463383137 Dec 16 12:59:57.432000 audit: BPF prog-id=100 op=LOAD Dec 16 12:59:57.432000 audit[2537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=2510 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366306165613736616230643034336536386133656163663463383137 Dec 16 12:59:57.434000 audit: BPF prog-id=101 op=LOAD Dec 16 12:59:57.440000 audit: BPF prog-id=102 op=LOAD Dec 16 12:59:57.440000 audit[2513]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2478 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316630656664633131326466373935656133343134373338623265 Dec 16 12:59:57.442000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:59:57.442000 audit[2513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316630656664633131326466373935656133343134373338623265 Dec 16 12:59:57.442000 audit: BPF prog-id=103 op=LOAD Dec 16 12:59:57.442000 audit[2513]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2478 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316630656664633131326466373935656133343134373338623265 Dec 16 12:59:57.443000 audit: BPF prog-id=104 op=LOAD Dec 16 12:59:57.443000 audit[2513]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2478 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316630656664633131326466373935656133343134373338623265 Dec 16 12:59:57.443000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:59:57.443000 audit[2513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316630656664633131326466373935656133343134373338623265 Dec 16 12:59:57.443000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:59:57.443000 audit[2513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316630656664633131326466373935656133343134373338623265 Dec 16 12:59:57.443000 audit: BPF prog-id=105 op=LOAD Dec 16 12:59:57.443000 audit[2513]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2478 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.443000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316630656664633131326466373935656133343134373338623265 Dec 16 12:59:57.475670 containerd[1618]: time="2025-12-16T12:59:57.475574788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-193-244,Uid:24ddbc6ec46e114d80dc0c6d1d7ec496,Namespace:kube-system,Attempt:0,} returns sandbox id \"11736fdb7357031cceea2b57f12d9fcc6395db3deeddedc06562086dd9b345bd\"" Dec 16 12:59:57.477166 kubelet[2428]: E1216 12:59:57.477114 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:57.481403 containerd[1618]: time="2025-12-16T12:59:57.480975288Z" level=info msg="CreateContainer within sandbox \"11736fdb7357031cceea2b57f12d9fcc6395db3deeddedc06562086dd9b345bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:59:57.489810 containerd[1618]: time="2025-12-16T12:59:57.489790508Z" level=info msg="Container 12c8a1fb80f19cc5ae4269d0515c5f2292ff8e5f16471f741b7ed8a1a99f08eb: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:59:57.494255 containerd[1618]: time="2025-12-16T12:59:57.494234538Z" level=info msg="CreateContainer within sandbox \"11736fdb7357031cceea2b57f12d9fcc6395db3deeddedc06562086dd9b345bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"12c8a1fb80f19cc5ae4269d0515c5f2292ff8e5f16471f741b7ed8a1a99f08eb\"" Dec 16 12:59:57.494995 containerd[1618]: time="2025-12-16T12:59:57.494978128Z" level=info msg="StartContainer for \"12c8a1fb80f19cc5ae4269d0515c5f2292ff8e5f16471f741b7ed8a1a99f08eb\"" Dec 16 12:59:57.496314 containerd[1618]: time="2025-12-16T12:59:57.496294578Z" level=info msg="connecting to shim 12c8a1fb80f19cc5ae4269d0515c5f2292ff8e5f16471f741b7ed8a1a99f08eb" address="unix:///run/containerd/s/8240c6207912e6f67c9109acf4040300f3be7f6893ebe5253485ef830eb0bc01" protocol=ttrpc version=3 Dec 16 12:59:57.505490 containerd[1618]: time="2025-12-16T12:59:57.505470728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-193-244,Uid:51e677e3d07b0d26f5507864b6f019d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf0aea76ab0d043e68a3eacf4c817075afbc1e3abd34a51d38285b05255bf5fb\"" Dec 16 12:59:57.506281 kubelet[2428]: E1216 12:59:57.506264 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:57.511227 containerd[1618]: time="2025-12-16T12:59:57.510300418Z" level=info msg="CreateContainer within sandbox \"cf0aea76ab0d043e68a3eacf4c817075afbc1e3abd34a51d38285b05255bf5fb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:59:57.527075 containerd[1618]: time="2025-12-16T12:59:57.527046198Z" level=info msg="Container a1e6c9a1dff72456392fec68685a61633256829b268ac1feb760f8a0966390a3: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:59:57.532217 systemd[1]: Started cri-containerd-12c8a1fb80f19cc5ae4269d0515c5f2292ff8e5f16471f741b7ed8a1a99f08eb.scope - libcontainer container 12c8a1fb80f19cc5ae4269d0515c5f2292ff8e5f16471f741b7ed8a1a99f08eb. 
Dec 16 12:59:57.535798 containerd[1618]: time="2025-12-16T12:59:57.535777438Z" level=info msg="CreateContainer within sandbox \"cf0aea76ab0d043e68a3eacf4c817075afbc1e3abd34a51d38285b05255bf5fb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a1e6c9a1dff72456392fec68685a61633256829b268ac1feb760f8a0966390a3\"" Dec 16 12:59:57.538032 containerd[1618]: time="2025-12-16T12:59:57.537810258Z" level=info msg="StartContainer for \"a1e6c9a1dff72456392fec68685a61633256829b268ac1feb760f8a0966390a3\"" Dec 16 12:59:57.540113 containerd[1618]: time="2025-12-16T12:59:57.540063148Z" level=info msg="connecting to shim a1e6c9a1dff72456392fec68685a61633256829b268ac1feb760f8a0966390a3" address="unix:///run/containerd/s/20cdc986fe5908b9f9f878ffde8ede7622acb2cf27e42d55d3cacd0f909cc9e4" protocol=ttrpc version=3 Dec 16 12:59:57.550460 containerd[1618]: time="2025-12-16T12:59:57.550431888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-193-244,Uid:4917da192cf85b32497da7945a35ade2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f61f0efdc112df795ea3414738b2ed8766cc7650a88cce4303176530454240b0\"" Dec 16 12:59:57.553554 kubelet[2428]: E1216 12:59:57.553518 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:57.559112 containerd[1618]: time="2025-12-16T12:59:57.558639698Z" level=info msg="CreateContainer within sandbox \"f61f0efdc112df795ea3414738b2ed8766cc7650a88cce4303176530454240b0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:59:57.561051 kubelet[2428]: E1216 12:59:57.560999 2428 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.193.244:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-193-244&limit=500&resourceVersion=0\": dial tcp 172.239.193.244:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:59:57.560000 audit: BPF prog-id=106 op=LOAD Dec 16 12:59:57.561000 audit: BPF prog-id=107 op=LOAD Dec 16 12:59:57.561000 audit[2597]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2492 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132633861316662383066313963633561653432363964303531356335 Dec 16 12:59:57.561000 audit: BPF prog-id=107 op=UNLOAD Dec 16 12:59:57.561000 audit[2597]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2492 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132633861316662383066313963633561653432363964303531356335 Dec 16 12:59:57.562000 audit: BPF prog-id=108 op=LOAD Dec 16 
12:59:57.562000 audit[2597]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2492 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132633861316662383066313963633561653432363964303531356335 Dec 16 12:59:57.563000 audit: BPF prog-id=109 op=LOAD Dec 16 12:59:57.563000 audit[2597]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2492 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132633861316662383066313963633561653432363964303531356335 Dec 16 12:59:57.563000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:59:57.563000 audit[2597]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2492 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132633861316662383066313963633561653432363964303531356335 Dec 16 12:59:57.563000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:59:57.563000 audit[2597]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2492 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132633861316662383066313963633561653432363964303531356335 Dec 16 12:59:57.565306 containerd[1618]: time="2025-12-16T12:59:57.564915248Z" level=info msg="Container d4cba5cda1d175812614e3ac9595a9f1d64f1541e3c1ac7cf73657740efb4784: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:59:57.563000 audit: BPF prog-id=110 op=LOAD Dec 16 12:59:57.563000 audit[2597]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2492 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132633861316662383066313963633561653432363964303531356335 
Dec 16 12:59:57.571227 systemd[1]: Started cri-containerd-a1e6c9a1dff72456392fec68685a61633256829b268ac1feb760f8a0966390a3.scope - libcontainer container a1e6c9a1dff72456392fec68685a61633256829b268ac1feb760f8a0966390a3. Dec 16 12:59:57.577181 containerd[1618]: time="2025-12-16T12:59:57.577150058Z" level=info msg="CreateContainer within sandbox \"f61f0efdc112df795ea3414738b2ed8766cc7650a88cce4303176530454240b0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d4cba5cda1d175812614e3ac9595a9f1d64f1541e3c1ac7cf73657740efb4784\"" Dec 16 12:59:57.578618 containerd[1618]: time="2025-12-16T12:59:57.578599488Z" level=info msg="StartContainer for \"d4cba5cda1d175812614e3ac9595a9f1d64f1541e3c1ac7cf73657740efb4784\"" Dec 16 12:59:57.580462 containerd[1618]: time="2025-12-16T12:59:57.580399078Z" level=info msg="connecting to shim d4cba5cda1d175812614e3ac9595a9f1d64f1541e3c1ac7cf73657740efb4784" address="unix:///run/containerd/s/e7ec4e39946f728a70faed4d3b76c92822698ddf072832e5d3120942c21f7182" protocol=ttrpc version=3 Dec 16 12:59:57.592000 audit: BPF prog-id=111 op=LOAD Dec 16 12:59:57.594000 audit: BPF prog-id=112 op=LOAD Dec 16 12:59:57.594000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2510 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131653663396131646666373234353633393266656336383638356136 Dec 16 12:59:57.594000 audit: BPF prog-id=112 op=UNLOAD Dec 16 12:59:57.594000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131653663396131646666373234353633393266656336383638356136 Dec 16 12:59:57.594000 audit: BPF prog-id=113 op=LOAD Dec 16 12:59:57.594000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2510 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131653663396131646666373234353633393266656336383638356136 Dec 16 12:59:57.594000 audit: BPF prog-id=114 op=LOAD Dec 16 12:59:57.594000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2510 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.594000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131653663396131646666373234353633393266656336383638356136 Dec 16 12:59:57.594000 audit: BPF prog-id=114 op=UNLOAD Dec 16 12:59:57.594000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131653663396131646666373234353633393266656336383638356136 Dec 16 12:59:57.595000 audit: BPF prog-id=113 op=UNLOAD Dec 16 12:59:57.595000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131653663396131646666373234353633393266656336383638356136 Dec 16 12:59:57.595000 audit: BPF prog-id=115 op=LOAD Dec 16 12:59:57.595000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2510 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131653663396131646666373234353633393266656336383638356136 Dec 16 12:59:57.610159 systemd[1]: Started cri-containerd-d4cba5cda1d175812614e3ac9595a9f1d64f1541e3c1ac7cf73657740efb4784.scope - libcontainer container d4cba5cda1d175812614e3ac9595a9f1d64f1541e3c1ac7cf73657740efb4784. 
Dec 16 12:59:57.634000 audit: BPF prog-id=116 op=LOAD Dec 16 12:59:57.634000 audit: BPF prog-id=117 op=LOAD Dec 16 12:59:57.634000 audit[2649]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2478 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434636261356364613164313735383132363134653361633935393561 Dec 16 12:59:57.634000 audit: BPF prog-id=117 op=UNLOAD Dec 16 12:59:57.634000 audit[2649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434636261356364613164313735383132363134653361633935393561 Dec 16 12:59:57.634000 audit: BPF prog-id=118 op=LOAD Dec 16 12:59:57.634000 audit[2649]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2478 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434636261356364613164313735383132363134653361633935393561 Dec 16 12:59:57.635000 audit: BPF prog-id=119 op=LOAD Dec 16 12:59:57.635000 audit[2649]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2478 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434636261356364613164313735383132363134653361633935393561 Dec 16 12:59:57.635000 audit: BPF prog-id=119 op=UNLOAD Dec 16 12:59:57.635000 audit[2649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434636261356364613164313735383132363134653361633935393561 Dec 16 12:59:57.635000 audit: BPF prog-id=118 op=UNLOAD Dec 16 12:59:57.635000 audit[2649]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2478 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434636261356364613164313735383132363134653361633935393561 Dec 16 12:59:57.635000 audit: BPF prog-id=120 op=LOAD Dec 16 12:59:57.635000 audit[2649]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2478 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:57.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434636261356364613164313735383132363134653361633935393561 Dec 16 12:59:57.655372 containerd[1618]: time="2025-12-16T12:59:57.655347628Z" level=info msg="StartContainer for \"12c8a1fb80f19cc5ae4269d0515c5f2292ff8e5f16471f741b7ed8a1a99f08eb\" returns successfully" Dec 16 12:59:57.668792 containerd[1618]: time="2025-12-16T12:59:57.668558398Z" level=info msg="StartContainer for \"a1e6c9a1dff72456392fec68685a61633256829b268ac1feb760f8a0966390a3\" returns successfully" Dec 16 12:59:57.687943 containerd[1618]: time="2025-12-16T12:59:57.687906118Z" level=info msg="StartContainer for \"d4cba5cda1d175812614e3ac9595a9f1d64f1541e3c1ac7cf73657740efb4784\" returns successfully" Dec 16 12:59:57.881332 kubelet[2428]: I1216 12:59:57.881175 2428 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-244" Dec 16 12:59:58.342838 kubelet[2428]: E1216 12:59:58.342723 2428 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-244\" not found" node="172-239-193-244" Dec 16 12:59:58.342944 kubelet[2428]: E1216 12:59:58.342843 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:58.348024 kubelet[2428]: E1216 12:59:58.346484 2428 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-244\" not found" node="172-239-193-244" Dec 16 12:59:58.348786 kubelet[2428]: E1216 12:59:58.348724 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:58.350167 kubelet[2428]: E1216 12:59:58.350145 2428 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-244\" not found" node="172-239-193-244" Dec 16 12:59:58.350250 kubelet[2428]: E1216 12:59:58.350232 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:59.183239 kubelet[2428]: E1216 12:59:59.183182 2428 nodelease.go:49] "Failed to get node 
when trying to set owner ref to the node lease" err="nodes \"172-239-193-244\" not found" node="172-239-193-244" Dec 16 12:59:59.255430 kubelet[2428]: I1216 12:59:59.255396 2428 kubelet_node_status.go:78] "Successfully registered node" node="172-239-193-244" Dec 16 12:59:59.255430 kubelet[2428]: E1216 12:59:59.255426 2428 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-239-193-244\": node \"172-239-193-244\" not found" Dec 16 12:59:59.270230 kubelet[2428]: E1216 12:59:59.270205 2428 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-244\" not found" Dec 16 12:59:59.352878 kubelet[2428]: E1216 12:59:59.352786 2428 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-244\" not found" node="172-239-193-244" Dec 16 12:59:59.353147 kubelet[2428]: E1216 12:59:59.353095 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:59.353243 kubelet[2428]: E1216 12:59:59.353231 2428 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-244\" not found" node="172-239-193-244" Dec 16 12:59:59.353372 kubelet[2428]: E1216 12:59:59.353360 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 12:59:59.370960 kubelet[2428]: E1216 12:59:59.370938 2428 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-244\" not found" Dec 16 12:59:59.471815 kubelet[2428]: E1216 12:59:59.471727 2428 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-244\" not found" Dec 16 12:59:59.572421 kubelet[2428]: E1216 12:59:59.572393 2428 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-244\" not found" Dec 16 12:59:59.673338 kubelet[2428]: E1216 12:59:59.673315 2428 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-244\" not found" Dec 16 12:59:59.774454 kubelet[2428]: E1216 12:59:59.774376 2428 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-244\" not found" Dec 16 12:59:59.874878 kubelet[2428]: E1216 12:59:59.874815 2428 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-244\" not found" Dec 16 12:59:59.913025 kubelet[2428]: I1216 12:59:59.912972 2428 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 12:59:59.918911 kubelet[2428]: E1216 12:59:59.918888 2428 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-193-244\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 12:59:59.918911 kubelet[2428]: I1216 12:59:59.918905 2428 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 12:59:59.920967 kubelet[2428]: E1216 12:59:59.920946 2428 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-193-244\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 12:59:59.920967 kubelet[2428]: I1216 12:59:59.920960 2428 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-244" Dec 16 12:59:59.922431 kubelet[2428]: E1216 12:59:59.922404 2428 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-193-244\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-193-244" Dec 16 13:00:00.292470 kubelet[2428]: I1216 13:00:00.292276 2428 apiserver.go:52] "Watching apiserver" Dec 16 13:00:00.306381 kubelet[2428]: I1216 13:00:00.306334 2428 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:00:00.353141 kubelet[2428]: I1216 13:00:00.353105 2428 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 13:00:00.365324 kubelet[2428]: E1216 13:00:00.365288 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:01.354704 kubelet[2428]: E1216 13:00:01.354666 2428 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:01.463270 systemd[1]: Reload requested from client PID 2715 ('systemctl') (unit session-7.scope)... Dec 16 13:00:01.463289 systemd[1]: Reloading... Dec 16 13:00:01.565059 zram_generator::config[2762]: No configuration found. Dec 16 13:00:01.790427 systemd[1]: Reloading finished in 326 ms. Dec 16 13:00:01.822435 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:00:01.836151 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:00:01.836685 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:00:01.844782 kernel: kauditd_printk_skb: 210 callbacks suppressed Dec 16 13:00:01.844874 kernel: audit: type=1131 audit(1765890001.835:402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:00:01.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:00:01.836877 systemd[1]: kubelet.service: Consumed 1.188s CPU time, 122.7M memory peak. Dec 16 13:00:01.843394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:00:01.844000 audit: BPF prog-id=121 op=LOAD Dec 16 13:00:01.848068 kernel: audit: type=1334 audit(1765890001.844:403): prog-id=121 op=LOAD Dec 16 13:00:01.850043 kernel: audit: type=1334 audit(1765890001.844:404): prog-id=74 op=UNLOAD Dec 16 13:00:01.844000 audit: BPF prog-id=74 op=UNLOAD Dec 16 13:00:01.847000 audit: BPF prog-id=122 op=LOAD Dec 16 13:00:01.852068 kernel: audit: type=1334 audit(1765890001.847:405): prog-id=122 op=LOAD Dec 16 13:00:01.847000 audit: BPF prog-id=88 op=UNLOAD Dec 16 13:00:01.847000 audit: BPF prog-id=123 op=LOAD Dec 16 13:00:01.856495 kernel: audit: type=1334 audit(1765890001.847:406): prog-id=88 op=UNLOAD Dec 16 13:00:01.856556 kernel: audit: type=1334 audit(1765890001.847:407): prog-id=123 op=LOAD Dec 16 13:00:01.856775 kernel: audit: type=1334 audit(1765890001.847:408): prog-id=124 op=LOAD Dec 16 13:00:01.847000 audit: BPF prog-id=124 op=LOAD Dec 16 13:00:01.858584 kernel: audit: type=1334 audit(1765890001.849:409): prog-id=89 op=UNLOAD Dec 16 13:00:01.849000 audit: BPF prog-id=89 op=UNLOAD Dec 16 13:00:01.849000 audit: BPF prog-id=90 op=UNLOAD Dec 16 13:00:01.861178 kernel: audit: type=1334 audit(1765890001.849:410): prog-id=90 op=UNLOAD Dec 16 13:00:01.849000 audit: BPF prog-id=125 op=LOAD Dec 16 13:00:01.863303 kernel: audit: type=1334 audit(1765890001.849:411): prog-id=125 op=LOAD Dec 16 13:00:01.849000 audit: BPF prog-id=70 op=UNLOAD Dec 16 13:00:01.849000 audit: BPF prog-id=126 op=LOAD Dec 16 13:00:01.851000 audit: BPF prog-id=127 op=LOAD Dec 16 13:00:01.851000 audit: BPF prog-id=71 op=UNLOAD Dec 16 13:00:01.851000 audit: BPF prog-id=72 op=UNLOAD Dec 16 13:00:01.851000 audit: BPF prog-id=128 op=LOAD Dec 16 13:00:01.851000 audit: BPF prog-id=75 op=UNLOAD Dec 16 13:00:01.866000 audit: BPF prog-id=129 op=LOAD Dec 16 13:00:01.867000 audit: BPF prog-id=130 op=LOAD Dec 16 13:00:01.867000 audit: BPF prog-id=86 op=UNLOAD Dec 16 13:00:01.867000 audit: BPF prog-id=87 op=UNLOAD Dec 16 13:00:01.869000 audit: BPF prog-id=131 op=LOAD Dec 16 13:00:01.869000 audit: BPF prog-id=83 op=UNLOAD Dec 16 13:00:01.869000 audit: BPF prog-id=132 op=LOAD Dec 16 13:00:01.869000 audit: BPF prog-id=133 op=LOAD Dec 16 13:00:01.869000 audit: BPF prog-id=84 op=UNLOAD Dec 16 13:00:01.869000 audit: BPF prog-id=85 op=UNLOAD Dec 16 13:00:01.870000 audit: BPF prog-id=134 op=LOAD Dec 16 13:00:01.870000 audit: BPF prog-id=67 op=UNLOAD Dec 16 13:00:01.870000 audit: BPF prog-id=135 op=LOAD Dec 16 13:00:01.870000 audit: BPF prog-id=136 op=LOAD Dec 16 13:00:01.870000 audit: BPF prog-id=68 op=UNLOAD Dec 16 13:00:01.870000 audit: BPF prog-id=69 op=UNLOAD Dec 16 13:00:01.871000 audit: BPF prog-id=137 op=LOAD Dec 16 13:00:01.871000 audit: BPF prog-id=76 op=UNLOAD Dec 16 13:00:01.871000 audit: BPF prog-id=138 op=LOAD Dec 16 13:00:01.872000 audit: BPF prog-id=139 op=LOAD Dec 16 13:00:01.872000 audit: BPF prog-id=77 op=UNLOAD Dec 16 13:00:01.872000 audit: BPF prog-id=78 op=UNLOAD Dec 16 13:00:01.872000 audit: BPF prog-id=140 op=LOAD Dec 16 13:00:01.872000 audit: BPF prog-id=82 op=UNLOAD Dec 16 13:00:01.873000 audit: BPF prog-id=141 op=LOAD Dec 16 13:00:01.873000 audit: BPF prog-id=79 op=UNLOAD Dec 16 13:00:01.873000 audit: BPF prog-id=142 op=LOAD Dec 16 13:00:01.873000 audit: BPF prog-id=143 op=LOAD Dec 16 13:00:01.874000 audit: BPF prog-id=80 op=UNLOAD Dec 16 13:00:01.874000 audit: BPF prog-id=81 op=UNLOAD Dec 16 13:00:01.874000 audit: BPF prog-id=144 op=LOAD Dec 16 13:00:01.874000 audit: BPF prog-id=73 op=UNLOAD Dec 16 13:00:02.032893 systemd[1]: Started kubelet.service - kubelet: The 
Kubernetes Node Agent. Dec 16 13:00:02.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:00:02.049503 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:00:02.090992 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:00:02.090992 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:00:02.090992 kubelet[2813]: I1216 13:00:02.090571 2813 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:00:02.096912 kubelet[2813]: I1216 13:00:02.096855 2813 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:00:02.097111 kubelet[2813]: I1216 13:00:02.097100 2813 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:00:02.097177 kubelet[2813]: I1216 13:00:02.097168 2813 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:00:02.097242 kubelet[2813]: I1216 13:00:02.097232 2813 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:00:02.097482 kubelet[2813]: I1216 13:00:02.097469 2813 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:00:02.099313 kubelet[2813]: I1216 13:00:02.099273 2813 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:00:02.106684 kubelet[2813]: I1216 13:00:02.106651 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:00:02.110657 kubelet[2813]: I1216 13:00:02.110633 2813 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:00:02.115759 kubelet[2813]: I1216 13:00:02.115732 2813 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:00:02.116142 kubelet[2813]: I1216 13:00:02.116108 2813 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:00:02.116300 kubelet[2813]: I1216 13:00:02.116139 2813 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-193-244","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:00:02.116377 kubelet[2813]: I1216 13:00:02.116302 2813 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:00:02.116377 kubelet[2813]: I1216 13:00:02.116311 2813 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:00:02.116377 kubelet[2813]: I1216 13:00:02.116355 2813 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:00:02.117193 kubelet[2813]: I1216 13:00:02.117181 2813 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:00:02.117360 kubelet[2813]: I1216 13:00:02.117346 2813 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:00:02.117389 kubelet[2813]: I1216 13:00:02.117369 2813 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:00:02.117389 kubelet[2813]: I1216 13:00:02.117388 2813 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:00:02.117439 kubelet[2813]: I1216 13:00:02.117414 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:00:02.119026 kubelet[2813]: I1216 13:00:02.118784 2813 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:00:02.119207 kubelet[2813]: I1216 13:00:02.119190 2813 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:00:02.119243 kubelet[2813]: I1216 13:00:02.119218 2813 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:00:02.122949 
kubelet[2813]: I1216 13:00:02.122928 2813 server.go:1262] "Started kubelet" Dec 16 13:00:02.124638 kubelet[2813]: I1216 13:00:02.124616 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:00:02.137178 kubelet[2813]: I1216 13:00:02.137063 2813 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:00:02.139068 kubelet[2813]: I1216 13:00:02.139054 2813 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:00:02.145138 kubelet[2813]: I1216 13:00:02.145105 2813 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:00:02.145247 kubelet[2813]: I1216 13:00:02.145232 2813 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:00:02.145468 kubelet[2813]: I1216 13:00:02.145454 2813 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:00:02.145936 kubelet[2813]: I1216 13:00:02.145920 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:00:02.149190 kubelet[2813]: I1216 13:00:02.149176 2813 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:00:02.149382 kubelet[2813]: E1216 13:00:02.149366 2813 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-244\" not found" Dec 16 13:00:02.150577 kubelet[2813]: I1216 13:00:02.150547 2813 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:00:02.150760 kubelet[2813]: I1216 13:00:02.150748 2813 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:00:02.153332 kubelet[2813]: I1216 13:00:02.153293 2813 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:00:02.154493 kubelet[2813]: I1216 13:00:02.154476 2813 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:00:02.154644 kubelet[2813]: I1216 13:00:02.154627 2813 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:00:02.155790 kubelet[2813]: I1216 13:00:02.154498 2813 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:00:02.155925 kubelet[2813]: I1216 13:00:02.155893 2813 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:00:02.155982 kubelet[2813]: I1216 13:00:02.155973 2813 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:00:02.156092 kubelet[2813]: E1216 13:00:02.156075 2813 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:00:02.160653 kubelet[2813]: E1216 13:00:02.160402 2813 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:00:02.162351 kubelet[2813]: I1216 13:00:02.162337 2813 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:00:02.212887 kubelet[2813]: I1216 13:00:02.212811 2813 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:00:02.213087 kubelet[2813]: I1216 13:00:02.213073 2813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:00:02.213187 kubelet[2813]: I1216 13:00:02.213178 2813 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:00:02.213454 kubelet[2813]: I1216 13:00:02.213392 2813 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:00:02.213454 kubelet[2813]: I1216 13:00:02.213404 2813 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:00:02.213454 kubelet[2813]: I1216 13:00:02.213421 2813 policy_none.go:49] "None policy: Start" Dec 16 13:00:02.213454 kubelet[2813]: I1216 13:00:02.213429 2813 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:00:02.213614 kubelet[2813]: I1216 13:00:02.213582 2813 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:00:02.214369 kubelet[2813]: I1216 13:00:02.213767 2813 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 13:00:02.214369 kubelet[2813]: I1216 13:00:02.213778 2813 policy_none.go:47] "Start" Dec 16 13:00:02.217854 kubelet[2813]: E1216 13:00:02.217840 2813 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:00:02.218381 kubelet[2813]: I1216 13:00:02.218369 2813 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:00:02.218672 kubelet[2813]: I1216 13:00:02.218648 2813 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:00:02.219060 kubelet[2813]: I1216 13:00:02.219043 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:00:02.219570 kubelet[2813]: E1216 13:00:02.219549 2813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:00:02.257051 kubelet[2813]: I1216 13:00:02.256855 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-244" Dec 16 13:00:02.259209 kubelet[2813]: I1216 13:00:02.259180 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 13:00:02.259371 kubelet[2813]: I1216 13:00:02.259357 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 13:00:02.283305 kubelet[2813]: E1216 13:00:02.283254 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-193-244\" already exists" pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 13:00:02.320991 kubelet[2813]: I1216 13:00:02.320911 2813 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-244" Dec 16 13:00:02.328783 kubelet[2813]: I1216 13:00:02.328756 2813 kubelet_node_status.go:124] "Node was previously registered" node="172-239-193-244" Dec 16 13:00:02.328873 kubelet[2813]: I1216 13:00:02.328811 2813 kubelet_node_status.go:78] "Successfully registered node" node="172-239-193-244" Dec 16 13:00:02.352137 kubelet[2813]: I1216 13:00:02.352112 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51e677e3d07b0d26f5507864b6f019d4-ca-certs\") pod \"kube-apiserver-172-239-193-244\" (UID: \"51e677e3d07b0d26f5507864b6f019d4\") " pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 13:00:02.352313 kubelet[2813]: I1216 13:00:02.352139 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51e677e3d07b0d26f5507864b6f019d4-k8s-certs\") pod \"kube-apiserver-172-239-193-244\" (UID: \"51e677e3d07b0d26f5507864b6f019d4\") " pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 13:00:02.352313 kubelet[2813]: I1216 13:00:02.352156 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/24ddbc6ec46e114d80dc0c6d1d7ec496-flexvolume-dir\") pod \"kube-controller-manager-172-239-193-244\" (UID: \"24ddbc6ec46e114d80dc0c6d1d7ec496\") " pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 13:00:02.352313 kubelet[2813]: I1216 13:00:02.352172 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24ddbc6ec46e114d80dc0c6d1d7ec496-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-193-244\" (UID: \"24ddbc6ec46e114d80dc0c6d1d7ec496\") " pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 13:00:02.352313 kubelet[2813]: I1216 13:00:02.352189 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4917da192cf85b32497da7945a35ade2-kubeconfig\") pod \"kube-scheduler-172-239-193-244\" (UID: \"4917da192cf85b32497da7945a35ade2\") " pod="kube-system/kube-scheduler-172-239-193-244" Dec 16 13:00:02.352313 kubelet[2813]: I1216 13:00:02.352203 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/51e677e3d07b0d26f5507864b6f019d4-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-193-244\" (UID: \"51e677e3d07b0d26f5507864b6f019d4\") " pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 13:00:02.352691 kubelet[2813]: I1216 13:00:02.352220 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24ddbc6ec46e114d80dc0c6d1d7ec496-ca-certs\") pod \"kube-controller-manager-172-239-193-244\" (UID: \"24ddbc6ec46e114d80dc0c6d1d7ec496\") " pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 13:00:02.352691 kubelet[2813]: I1216 13:00:02.352234 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24ddbc6ec46e114d80dc0c6d1d7ec496-k8s-certs\") pod \"kube-controller-manager-172-239-193-244\" (UID: \"24ddbc6ec46e114d80dc0c6d1d7ec496\") " pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 13:00:02.352691 kubelet[2813]: I1216 13:00:02.352250 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24ddbc6ec46e114d80dc0c6d1d7ec496-kubeconfig\") pod \"kube-controller-manager-172-239-193-244\" (UID: \"24ddbc6ec46e114d80dc0c6d1d7ec496\") " pod="kube-system/kube-controller-manager-172-239-193-244" Dec 16 13:00:02.569082 kubelet[2813]: E1216 13:00:02.568523 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:02.584077 kubelet[2813]: E1216 13:00:02.583139 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:02.584077 kubelet[2813]: E1216 13:00:02.583434 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:03.118315 kubelet[2813]: I1216 13:00:03.118279 2813 apiserver.go:52] "Watching apiserver" Dec 16 13:00:03.151345 kubelet[2813]: I1216 13:00:03.151280 2813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:00:03.189590 kubelet[2813]: I1216 13:00:03.189553 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 13:00:03.189996 kubelet[2813]: E1216 13:00:03.189962 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:03.190446 kubelet[2813]: E1216 13:00:03.190421 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:03.195289 kubelet[2813]: E1216 13:00:03.195256 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-193-244\" already exists" pod="kube-system/kube-apiserver-172-239-193-244" Dec 16 13:00:03.195370 kubelet[2813]: E1216 13:00:03.195351 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:03.207530 kubelet[2813]: I1216 13:00:03.207415 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-193-244" podStartSLOduration=1.207401418 podStartE2EDuration="1.207401418s" podCreationTimestamp="2025-12-16 13:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:00:03.201367518 +0000 UTC m=+1.147792021" watchObservedRunningTime="2025-12-16 13:00:03.207401418 +0000 UTC m=+1.153825911" Dec 16 13:00:03.213616 kubelet[2813]: I1216 13:00:03.213402 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-193-244" podStartSLOduration=1.213392088 podStartE2EDuration="1.213392088s" podCreationTimestamp="2025-12-16 13:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:00:03.207955978 +0000 UTC m=+1.154380471" watchObservedRunningTime="2025-12-16 13:00:03.213392088 +0000 UTC m=+1.159816571" Dec 16 13:00:03.221083 kubelet[2813]: I1216 13:00:03.221048 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-193-244" podStartSLOduration=3.221039688 podStartE2EDuration="3.221039688s" podCreationTimestamp="2025-12-16 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:00:03.213977848 +0000 UTC m=+1.160402331" watchObservedRunningTime="2025-12-16 13:00:03.221039688 +0000 UTC m=+1.167464181" Dec 16 13:00:03.260149 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 13:00:03.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:00:03.270000 audit: BPF prog-id=141 op=UNLOAD Dec 16 13:00:04.191444 kubelet[2813]: E1216 13:00:04.191182 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:04.191444 kubelet[2813]: E1216 13:00:04.191216 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:05.192807 kubelet[2813]: E1216 13:00:05.192778 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:06.198440 kubelet[2813]: E1216 13:00:06.198171 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:08.140204 kubelet[2813]: E1216 13:00:08.139897 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:08.196824 kubelet[2813]: E1216 13:00:08.196791 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:08.720861 kubelet[2813]: I1216 13:00:08.720832 2813 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:00:08.721234 containerd[1618]: time="2025-12-16T13:00:08.721097256Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:00:08.721829 kubelet[2813]: I1216 13:00:08.721809 2813 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:00:09.198691 kubelet[2813]: E1216 13:00:09.198455 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:09.473484 systemd[1]: Created slice kubepods-besteffort-podf385b890_279c_467b_b350_8135209d0532.slice - libcontainer container kubepods-besteffort-podf385b890_279c_467b_b350_8135209d0532.slice. 
Dec 16 13:00:09.591355 kubelet[2813]: I1216 13:00:09.591311 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f385b890-279c-467b-b350-8135209d0532-lib-modules\") pod \"kube-proxy-52jmz\" (UID: \"f385b890-279c-467b-b350-8135209d0532\") " pod="kube-system/kube-proxy-52jmz" Dec 16 13:00:09.591355 kubelet[2813]: I1216 13:00:09.591342 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh25t\" (UniqueName: \"kubernetes.io/projected/f385b890-279c-467b-b350-8135209d0532-kube-api-access-mh25t\") pod \"kube-proxy-52jmz\" (UID: \"f385b890-279c-467b-b350-8135209d0532\") " pod="kube-system/kube-proxy-52jmz" Dec 16 13:00:09.591355 kubelet[2813]: I1216 13:00:09.591362 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f385b890-279c-467b-b350-8135209d0532-kube-proxy\") pod \"kube-proxy-52jmz\" (UID: \"f385b890-279c-467b-b350-8135209d0532\") " pod="kube-system/kube-proxy-52jmz" Dec 16 13:00:09.591526 kubelet[2813]: I1216 13:00:09.591378 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f385b890-279c-467b-b350-8135209d0532-xtables-lock\") pod \"kube-proxy-52jmz\" (UID: \"f385b890-279c-467b-b350-8135209d0532\") " pod="kube-system/kube-proxy-52jmz" Dec 16 13:00:09.697088 kubelet[2813]: E1216 13:00:09.697052 2813 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:00:09.697088 kubelet[2813]: E1216 13:00:09.697081 2813 projected.go:196] Error preparing data for projected volume kube-api-access-mh25t for pod kube-system/kube-proxy-52jmz: configmap "kube-root-ca.crt" not found Dec 16 13:00:09.697223 kubelet[2813]: E1216 13:00:09.697130 2813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f385b890-279c-467b-b350-8135209d0532-kube-api-access-mh25t podName:f385b890-279c-467b-b350-8135209d0532 nodeName:}" failed. No retries permitted until 2025-12-16 13:00:10.197114912 +0000 UTC m=+8.143539395 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mh25t" (UniqueName: "kubernetes.io/projected/f385b890-279c-467b-b350-8135209d0532-kube-api-access-mh25t") pod "kube-proxy-52jmz" (UID: "f385b890-279c-467b-b350-8135209d0532") : configmap "kube-root-ca.crt" not found Dec 16 13:00:09.945632 systemd[1]: Created slice kubepods-besteffort-podf07954f4_338f_40f9_a0d5_a293383f646d.slice - libcontainer container kubepods-besteffort-podf07954f4_338f_40f9_a0d5_a293383f646d.slice. 
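The journal above shows both forms of the same identifier: systemd created kubepods-besteffort-podf385b890_279c_467b_b350_8135209d0532.slice, and the kubelet volume entries give the pod UID as f385b890-279c-467b-b350-8135209d0532. A minimal Go sketch of that naming rule as it can be read off these two lines (QoS-class prefix plus the UID with dashes swapped for underscores, since dashes denote nesting in systemd slice names); podSliceName is a hypothetical helper for illustration, not kubelet's own code.

package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the systemd slice name seen in the journal above from a
// pod UID. Dashes act as hierarchy separators in slice names, so the UID's
// dashes are replaced with underscores before being appended to the QoS prefix.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UID taken from the kube-proxy-52jmz volume entries above.
	fmt.Println(podSliceName("besteffort", "f385b890-279c-467b-b350-8135209d0532"))
	// kubepods-besteffort-podf385b890_279c_467b_b350_8135209d0532.slice
}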
Dec 16 13:00:09.993979 kubelet[2813]: I1216 13:00:09.993937 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p27sh\" (UniqueName: \"kubernetes.io/projected/f07954f4-338f-40f9-a0d5-a293383f646d-kube-api-access-p27sh\") pod \"tigera-operator-65cdcdfd6d-jsdjf\" (UID: \"f07954f4-338f-40f9-a0d5-a293383f646d\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-jsdjf" Dec 16 13:00:09.993979 kubelet[2813]: I1216 13:00:09.993970 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f07954f4-338f-40f9-a0d5-a293383f646d-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-jsdjf\" (UID: \"f07954f4-338f-40f9-a0d5-a293383f646d\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-jsdjf" Dec 16 13:00:10.251065 containerd[1618]: time="2025-12-16T13:00:10.250918086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-jsdjf,Uid:f07954f4-338f-40f9-a0d5-a293383f646d,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:00:10.269212 containerd[1618]: time="2025-12-16T13:00:10.269182776Z" level=info msg="connecting to shim e4e2a9445b44f22e71f4462ba475042cb21992e74a4e3b9121e9027f2f68166b" address="unix:///run/containerd/s/8ede0af0c4c59a4ad33ed61142f3078d8ee7cabce36423ea50f6afd963db6886" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:10.302256 systemd[1]: Started cri-containerd-e4e2a9445b44f22e71f4462ba475042cb21992e74a4e3b9121e9027f2f68166b.scope - libcontainer container e4e2a9445b44f22e71f4462ba475042cb21992e74a4e3b9121e9027f2f68166b. Dec 16 13:00:10.317000 audit: BPF prog-id=145 op=LOAD Dec 16 13:00:10.320131 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 16 13:00:10.320351 kernel: audit: type=1334 audit(1765890010.317:454): prog-id=145 op=LOAD Dec 16 13:00:10.317000 audit: BPF prog-id=146 op=LOAD Dec 16 13:00:10.324137 kernel: audit: type=1334 audit(1765890010.317:455): prog-id=146 op=LOAD Dec 16 13:00:10.327241 kernel: audit: type=1300 audit(1765890010.317:455): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2871 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.317000 audit[2883]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2871 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530 Dec 16 13:00:10.335036 kernel: audit: type=1327 audit(1765890010.317:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530 Dec 16 13:00:10.342122 kernel: audit: type=1334 audit(1765890010.318:456): prog-id=146 op=UNLOAD Dec 16 13:00:10.318000 audit: BPF prog-id=146 op=UNLOAD Dec 16 13:00:10.353722 kernel: audit: type=1300 
audit(1765890010.318:456): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.318000 audit[2883]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.361572 kernel: audit: type=1327 audit(1765890010.318:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530 Dec 16 13:00:10.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530 Dec 16 13:00:10.364028 kernel: audit: type=1334 audit(1765890010.318:457): prog-id=147 op=LOAD Dec 16 13:00:10.318000 audit: BPF prog-id=147 op=LOAD Dec 16 13:00:10.318000 audit[2883]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2871 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530 Dec 16 13:00:10.374029 kernel: audit: type=1300 audit(1765890010.318:457): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2871 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.374069 kernel: audit: type=1327 audit(1765890010.318:457): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530 Dec 16 13:00:10.318000 audit: BPF prog-id=148 op=LOAD Dec 16 13:00:10.318000 audit[2883]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2871 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530 Dec 16 13:00:10.318000 audit: BPF prog-id=148 op=UNLOAD Dec 16 13:00:10.318000 audit[2883]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530 Dec 16 13:00:10.318000 audit: BPF prog-id=147 op=UNLOAD Dec 16 13:00:10.318000 audit[2883]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530 Dec 16 13:00:10.318000 audit: BPF prog-id=149 op=LOAD Dec 16 13:00:10.318000 audit[2883]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2871 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530 Dec 16 13:00:10.387055 kubelet[2813]: E1216 13:00:10.386337 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:10.387995 containerd[1618]: time="2025-12-16T13:00:10.386641539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52jmz,Uid:f385b890-279c-467b-b350-8135209d0532,Namespace:kube-system,Attempt:0,}" Dec 16 13:00:10.396591 containerd[1618]: time="2025-12-16T13:00:10.396569478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-jsdjf,Uid:f07954f4-338f-40f9-a0d5-a293383f646d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e4e2a9445b44f22e71f4462ba475042cb21992e74a4e3b9121e9027f2f68166b\"" Dec 16 13:00:10.398832 containerd[1618]: time="2025-12-16T13:00:10.398778941Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:00:10.412718 containerd[1618]: time="2025-12-16T13:00:10.412685146Z" level=info msg="connecting to shim 044782e1742de9518b7daac347d710d2175403488c9b1ee62057b4f8f574206d" address="unix:///run/containerd/s/3c572d60dddbc4de1ef2e0b6274dca9a405302a5cb0f909f5d14dd518debb6ed" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:10.440170 systemd[1]: Started cri-containerd-044782e1742de9518b7daac347d710d2175403488c9b1ee62057b4f8f574206d.scope - libcontainer container 044782e1742de9518b7daac347d710d2175403488c9b1ee62057b4f8f574206d. 
Dec 16 13:00:10.451000 audit: BPF prog-id=150 op=LOAD Dec 16 13:00:10.452000 audit: BPF prog-id=151 op=LOAD Dec 16 13:00:10.452000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2916 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034343738326531373432646539353138623764616163333437643731 Dec 16 13:00:10.452000 audit: BPF prog-id=151 op=UNLOAD Dec 16 13:00:10.452000 audit[2927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2916 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034343738326531373432646539353138623764616163333437643731 Dec 16 13:00:10.452000 audit: BPF prog-id=152 op=LOAD Dec 16 13:00:10.452000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2916 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034343738326531373432646539353138623764616163333437643731 Dec 16 13:00:10.452000 audit: BPF prog-id=153 op=LOAD Dec 16 13:00:10.452000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2916 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034343738326531373432646539353138623764616163333437643731 Dec 16 13:00:10.452000 audit: BPF prog-id=153 op=UNLOAD Dec 16 13:00:10.452000 audit[2927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2916 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034343738326531373432646539353138623764616163333437643731 Dec 16 13:00:10.452000 audit: BPF prog-id=152 op=UNLOAD Dec 16 13:00:10.452000 audit[2927]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2916 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034343738326531373432646539353138623764616163333437643731 Dec 16 13:00:10.452000 audit: BPF prog-id=154 op=LOAD Dec 16 13:00:10.452000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2916 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034343738326531373432646539353138623764616163333437643731 Dec 16 13:00:10.475378 containerd[1618]: time="2025-12-16T13:00:10.475320287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52jmz,Uid:f385b890-279c-467b-b350-8135209d0532,Namespace:kube-system,Attempt:0,} returns sandbox id \"044782e1742de9518b7daac347d710d2175403488c9b1ee62057b4f8f574206d\"" Dec 16 13:00:10.476231 kubelet[2813]: E1216 13:00:10.476210 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:10.480315 containerd[1618]: time="2025-12-16T13:00:10.480270356Z" level=info msg="CreateContainer within sandbox \"044782e1742de9518b7daac347d710d2175403488c9b1ee62057b4f8f574206d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:00:10.489465 containerd[1618]: time="2025-12-16T13:00:10.489440397Z" level=info msg="Container 6d1fb3bb9daf9fe69eb51618ca1076ca8590c5005620e0fbfc3de93c58677814: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:00:10.494485 containerd[1618]: time="2025-12-16T13:00:10.494442828Z" level=info msg="CreateContainer within sandbox \"044782e1742de9518b7daac347d710d2175403488c9b1ee62057b4f8f574206d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d1fb3bb9daf9fe69eb51618ca1076ca8590c5005620e0fbfc3de93c58677814\"" Dec 16 13:00:10.494919 containerd[1618]: time="2025-12-16T13:00:10.494890869Z" level=info msg="StartContainer for \"6d1fb3bb9daf9fe69eb51618ca1076ca8590c5005620e0fbfc3de93c58677814\"" Dec 16 13:00:10.496429 containerd[1618]: time="2025-12-16T13:00:10.496399845Z" level=info msg="connecting to shim 6d1fb3bb9daf9fe69eb51618ca1076ca8590c5005620e0fbfc3de93c58677814" address="unix:///run/containerd/s/3c572d60dddbc4de1ef2e0b6274dca9a405302a5cb0f909f5d14dd518debb6ed" protocol=ttrpc version=3 Dec 16 13:00:10.518155 systemd[1]: Started cri-containerd-6d1fb3bb9daf9fe69eb51618ca1076ca8590c5005620e0fbfc3de93c58677814.scope - libcontainer container 6d1fb3bb9daf9fe69eb51618ca1076ca8590c5005620e0fbfc3de93c58677814. 
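The SYSCALL and PROCTITLE audit records in this stretch of the journal (the runc invocations above, and the iptables/ip6tables calls that follow) log the executed command line as one hex string with NUL bytes separating the argv entries. A minimal Go sketch for decoding them when reading such a log offline; decodeProctitle is a hypothetical helper, and the sample value is copied verbatim from the type=1327 record for audit(1765890010.317:455) above.

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into a readable
// command line. The kernel stores argv NUL-separated and truncates long
// command lines, which is why the container ID at the end is cut short.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	const p = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534653261393434356234346632326537316634343632626134373530"
	cmd, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd)
	// runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/e4e2a9445b44f22e71f4462ba4750
}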
Dec 16 13:00:10.586000 audit: BPF prog-id=155 op=LOAD Dec 16 13:00:10.586000 audit[2952]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=2916 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664316662336262396461663966653639656235313631386361313037 Dec 16 13:00:10.586000 audit: BPF prog-id=156 op=LOAD Dec 16 13:00:10.586000 audit[2952]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=2916 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664316662336262396461663966653639656235313631386361313037 Dec 16 13:00:10.587000 audit: BPF prog-id=156 op=UNLOAD Dec 16 13:00:10.587000 audit[2952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2916 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664316662336262396461663966653639656235313631386361313037 Dec 16 13:00:10.587000 audit: BPF prog-id=155 op=UNLOAD Dec 16 13:00:10.587000 audit[2952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2916 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664316662336262396461663966653639656235313631386361313037 Dec 16 13:00:10.587000 audit: BPF prog-id=157 op=LOAD Dec 16 13:00:10.587000 audit[2952]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=2916 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664316662336262396461663966653639656235313631386361313037 Dec 16 13:00:10.608704 containerd[1618]: time="2025-12-16T13:00:10.608656632Z" level=info msg="StartContainer for 
\"6d1fb3bb9daf9fe69eb51618ca1076ca8590c5005620e0fbfc3de93c58677814\" returns successfully" Dec 16 13:00:10.824000 audit[3016]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.824000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf186c590 a2=0 a3=7ffdf186c57c items=0 ppid=2965 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.824000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:00:10.829000 audit[3020]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.829000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff36ab5040 a2=0 a3=7fff36ab502c items=0 ppid=2965 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:00:10.831000 audit[3022]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:10.831000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0bc4e020 a2=0 a3=7ffe0bc4e00c items=0 ppid=2965 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.831000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:00:10.833000 audit[3023]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:10.833000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb37d2dc0 a2=0 a3=7ffcb37d2dac items=0 ppid=2965 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.833000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:00:10.834000 audit[3021]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.834000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2466c000 a2=0 a3=7ffc2466bfec items=0 ppid=2965 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.834000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:00:10.834000 audit[3024]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 
13:00:10.834000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde65ee200 a2=0 a3=7ffde65ee1ec items=0 ppid=2965 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.834000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:00:10.928000 audit[3025]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.928000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff6812cf00 a2=0 a3=7fff6812ceec items=0 ppid=2965 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.928000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:00:10.932000 audit[3027]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.932000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc47a70970 a2=0 a3=7ffc47a7095c items=0 ppid=2965 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 16 13:00:10.936000 audit[3030]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.936000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffbd3fa3e0 a2=0 a3=7fffbd3fa3cc items=0 ppid=2965 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 13:00:10.938000 audit[3031]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.938000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde08cc5a0 a2=0 a3=7ffde08cc58c items=0 ppid=2965 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.938000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:00:10.940000 audit[3033]: NETFILTER_CFG table=filter:64 family=2 entries=1 
op=nft_register_rule pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.940000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfc1a7370 a2=0 a3=7ffdfc1a735c items=0 ppid=2965 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:00:10.942000 audit[3034]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.942000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefffb3400 a2=0 a3=7ffefffb33ec items=0 ppid=2965 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.942000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:00:10.945000 audit[3036]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.945000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd1e3e4160 a2=0 a3=7ffd1e3e414c items=0 ppid=2965 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.945000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:00:10.949000 audit[3039]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.949000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcf345b6b0 a2=0 a3=7ffcf345b69c items=0 ppid=2965 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.949000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:00:10.951000 audit[3040]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.951000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcab75ba60 a2=0 a3=7ffcab75ba4c items=0 ppid=2965 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.951000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:00:10.954000 audit[3042]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.954000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd9e131660 a2=0 a3=7ffd9e13164c items=0 ppid=2965 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.954000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:00:10.955000 audit[3043]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.955000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf6b60b70 a2=0 a3=7ffcf6b60b5c items=0 ppid=2965 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:00:10.959000 audit[3045]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.959000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4cc673e0 a2=0 a3=7ffe4cc673cc items=0 ppid=2965 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.959000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 16 13:00:10.964000 audit[3048]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.964000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec9972360 a2=0 a3=7ffec997234c items=0 ppid=2965 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.964000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 13:00:10.968000 audit[3051]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.968000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd15b1cbb0 a2=0 a3=7ffd15b1cb9c items=0 ppid=2965 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 13:00:10.970000 audit[3052]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.970000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe8ef05420 a2=0 a3=7ffe8ef0540c items=0 ppid=2965 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.970000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:00:10.973000 audit[3054]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.973000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc85f685b0 a2=0 a3=7ffc85f6859c items=0 ppid=2965 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:00:10.977000 audit[3057]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.977000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff00bdb4f0 a2=0 a3=7fff00bdb4dc items=0 ppid=2965 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.977000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:00:10.979000 audit[3058]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.979000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff31472950 a2=0 a3=7fff3147293c items=0 ppid=2965 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:00:10.982000 audit[3060]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:00:10.982000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffb1505a90 a2=0 
a3=7fffb1505a7c items=0 ppid=2965 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:10.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:00:11.005000 audit[3066]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:11.005000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcf4095870 a2=0 a3=7ffcf409585c items=0 ppid=2965 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:11.012000 audit[3066]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:11.012000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcf4095870 a2=0 a3=7ffcf409585c items=0 ppid=2965 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.012000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:11.014000 audit[3072]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.014000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd91e840a0 a2=0 a3=7ffd91e8408c items=0 ppid=2965 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:00:11.018000 audit[3074]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.018000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe6cc5c090 a2=0 a3=7ffe6cc5c07c items=0 ppid=2965 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.018000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 13:00:11.024000 audit[3077]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.024000 audit[3077]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe8c031b90 a2=0 a3=7ffe8c031b7c items=0 ppid=2965 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.024000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 16 13:00:11.026000 audit[3078]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.026000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdc047580 a2=0 a3=7ffcdc04756c items=0 ppid=2965 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:00:11.031000 audit[3080]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.031000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa88d4ff0 a2=0 a3=7fffa88d4fdc items=0 ppid=2965 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:00:11.033000 audit[3081]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.033000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0960ac40 a2=0 a3=7ffe0960ac2c items=0 ppid=2965 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:00:11.037000 audit[3083]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.037000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff36c070d0 a2=0 a3=7fff36c070bc items=0 ppid=2965 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 
16 13:00:11.046000 audit[3086]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.046000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc1879bcd0 a2=0 a3=7ffc1879bcbc items=0 ppid=2965 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.046000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:00:11.048000 audit[3087]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.048000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe38a6a100 a2=0 a3=7ffe38a6a0ec items=0 ppid=2965 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.048000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:00:11.051000 audit[3089]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.051000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd3864790 a2=0 a3=7fffd386477c items=0 ppid=2965 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.051000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:00:11.053000 audit[3090]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.053000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd352ef7e0 a2=0 a3=7ffd352ef7cc items=0 ppid=2965 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:00:11.057000 audit[3092]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.057000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9a69d480 a2=0 a3=7ffe9a69d46c items=0 ppid=2965 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.057000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 13:00:11.063000 audit[3095]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.063000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc79adb30 a2=0 a3=7ffdc79adb1c items=0 ppid=2965 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.063000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 13:00:11.067000 audit[3098]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.067000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff90c91fe0 a2=0 a3=7fff90c91fcc items=0 ppid=2965 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 16 13:00:11.069000 audit[3099]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.069000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffff150c8f0 a2=0 a3=7ffff150c8dc items=0 ppid=2965 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.069000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:00:11.073000 audit[3101]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.073000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff99a98d00 a2=0 a3=7fff99a98cec items=0 ppid=2965 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:00:11.079000 audit[3104]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.079000 audit[3104]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb3324840 a2=0 a3=7ffcb332482c items=0 ppid=2965 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.079000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:00:11.082000 audit[3105]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.082000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd033a700 a2=0 a3=7fffd033a6ec items=0 ppid=2965 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.082000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:00:11.085000 audit[3107]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.085000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffde8502250 a2=0 a3=7ffde850223c items=0 ppid=2965 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.085000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:00:11.087000 audit[3108]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.087000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea70f0280 a2=0 a3=7ffea70f026c items=0 ppid=2965 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:00:11.091000 audit[3110]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.091000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd7423c4d0 a2=0 a3=7ffd7423c4bc items=0 ppid=2965 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.091000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:00:11.097000 audit[3113]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:00:11.097000 audit[3113]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=228 a0=3 a1=7ffca85d63b0 a2=0 a3=7ffca85d639c items=0 ppid=2965 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:00:11.110000 audit[3115]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:00:11.110000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffddf0ba420 a2=0 a3=7ffddf0ba40c items=0 ppid=2965 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.110000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:11.110000 audit[3115]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:00:11.110000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffddf0ba420 a2=0 a3=7ffddf0ba40c items=0 ppid=2965 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:11.110000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:11.204127 kubelet[2813]: E1216 13:00:11.204048 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:11.216125 kubelet[2813]: I1216 13:00:11.216058 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-52jmz" podStartSLOduration=2.215979274 podStartE2EDuration="2.215979274s" podCreationTimestamp="2025-12-16 13:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:00:11.215963243 +0000 UTC m=+9.162387726" watchObservedRunningTime="2025-12-16 13:00:11.215979274 +0000 UTC m=+9.162403777" Dec 16 13:00:12.405560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17170329.mount: Deactivated successfully. 
Dec 16 13:00:12.961441 containerd[1618]: time="2025-12-16T13:00:12.961377966Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:12.963100 containerd[1618]: time="2025-12-16T13:00:12.963077902Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 13:00:12.963226 containerd[1618]: time="2025-12-16T13:00:12.963208194Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:12.965368 containerd[1618]: time="2025-12-16T13:00:12.964715296Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:12.965368 containerd[1618]: time="2025-12-16T13:00:12.965283248Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.566369824s" Dec 16 13:00:12.965368 containerd[1618]: time="2025-12-16T13:00:12.965303109Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:00:12.968630 containerd[1618]: time="2025-12-16T13:00:12.968600849Z" level=info msg="CreateContainer within sandbox \"e4e2a9445b44f22e71f4462ba475042cb21992e74a4e3b9121e9027f2f68166b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:00:12.974054 containerd[1618]: time="2025-12-16T13:00:12.974024024Z" level=info msg="Container ebd0ac765cd3575c711ee5e26c9cf7e569fde2f8d7eaaf78394e2cca8c6660f4: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:00:12.984260 containerd[1618]: time="2025-12-16T13:00:12.984236670Z" level=info msg="CreateContainer within sandbox \"e4e2a9445b44f22e71f4462ba475042cb21992e74a4e3b9121e9027f2f68166b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ebd0ac765cd3575c711ee5e26c9cf7e569fde2f8d7eaaf78394e2cca8c6660f4\"" Dec 16 13:00:12.985192 containerd[1618]: time="2025-12-16T13:00:12.985132099Z" level=info msg="StartContainer for \"ebd0ac765cd3575c711ee5e26c9cf7e569fde2f8d7eaaf78394e2cca8c6660f4\"" Dec 16 13:00:12.985852 containerd[1618]: time="2025-12-16T13:00:12.985818374Z" level=info msg="connecting to shim ebd0ac765cd3575c711ee5e26c9cf7e569fde2f8d7eaaf78394e2cca8c6660f4" address="unix:///run/containerd/s/8ede0af0c4c59a4ad33ed61142f3078d8ee7cabce36423ea50f6afd963db6886" protocol=ttrpc version=3 Dec 16 13:00:13.007154 systemd[1]: Started cri-containerd-ebd0ac765cd3575c711ee5e26c9cf7e569fde2f8d7eaaf78394e2cca8c6660f4.scope - libcontainer container ebd0ac765cd3575c711ee5e26c9cf7e569fde2f8d7eaaf78394e2cca8c6660f4. 
Dec 16 13:00:13.020000 audit: BPF prog-id=158 op=LOAD Dec 16 13:00:13.020000 audit: BPF prog-id=159 op=LOAD Dec 16 13:00:13.020000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=2871 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:13.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643061633736356364333537356337313165653565323663396366 Dec 16 13:00:13.020000 audit: BPF prog-id=159 op=UNLOAD Dec 16 13:00:13.020000 audit[3125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:13.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643061633736356364333537356337313165653565323663396366 Dec 16 13:00:13.020000 audit: BPF prog-id=160 op=LOAD Dec 16 13:00:13.020000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=2871 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:13.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643061633736356364333537356337313165653565323663396366 Dec 16 13:00:13.021000 audit: BPF prog-id=161 op=LOAD Dec 16 13:00:13.021000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=2871 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:13.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643061633736356364333537356337313165653565323663396366 Dec 16 13:00:13.021000 audit: BPF prog-id=161 op=UNLOAD Dec 16 13:00:13.021000 audit[3125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:13.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643061633736356364333537356337313165653565323663396366 Dec 16 13:00:13.021000 audit: BPF prog-id=160 op=UNLOAD Dec 16 13:00:13.021000 audit[3125]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2871 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:13.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643061633736356364333537356337313165653565323663396366 Dec 16 13:00:13.021000 audit: BPF prog-id=162 op=LOAD Dec 16 13:00:13.021000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=2871 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:13.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562643061633736356364333537356337313165653565323663396366 Dec 16 13:00:13.039914 containerd[1618]: time="2025-12-16T13:00:13.039885068Z" level=info msg="StartContainer for \"ebd0ac765cd3575c711ee5e26c9cf7e569fde2f8d7eaaf78394e2cca8c6660f4\" returns successfully" Dec 16 13:00:14.191637 kubelet[2813]: E1216 13:00:14.191429 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:14.203381 kubelet[2813]: I1216 13:00:14.203275 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-jsdjf" podStartSLOduration=2.635188731 podStartE2EDuration="5.203261043s" podCreationTimestamp="2025-12-16 13:00:09 +0000 UTC" firstStartedPulling="2025-12-16 13:00:10.398159656 +0000 UTC m=+8.344584139" lastFinishedPulling="2025-12-16 13:00:12.966231968 +0000 UTC m=+10.912656451" observedRunningTime="2025-12-16 13:00:13.224552727 +0000 UTC m=+11.170977210" watchObservedRunningTime="2025-12-16 13:00:14.203261043 +0000 UTC m=+12.149685526" Dec 16 13:00:16.203345 kubelet[2813]: E1216 13:00:16.203284 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:18.076187 update_engine[1601]: I20251216 13:00:18.076112 1601 update_attempter.cc:509] Updating boot flags... Dec 16 13:00:18.552000 audit[1871]: USER_END pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:00:18.553453 sudo[1871]: pam_unix(sudo:session): session closed for user root Dec 16 13:00:18.554154 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 13:00:18.554222 kernel: audit: type=1106 audit(1765890018.552:534): pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:00:18.560000 audit[1871]: CRED_DISP pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:00:18.571026 kernel: audit: type=1104 audit(1765890018.560:535): pid=1871 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:00:18.614643 sshd[1870]: Connection closed by 147.75.109.163 port 59048 Dec 16 13:00:18.615402 sshd-session[1867]: pam_unix(sshd:session): session closed for user core Dec 16 13:00:18.616000 audit[1867]: USER_END pid=1867 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:00:18.621578 systemd[1]: sshd@6-172.239.193.244:22-147.75.109.163:59048.service: Deactivated successfully. Dec 16 13:00:18.626648 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:00:18.627047 kernel: audit: type=1106 audit(1765890018.616:536): pid=1867 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:00:18.627869 systemd[1]: session-7.scope: Consumed 4.426s CPU time, 226.9M memory peak. Dec 16 13:00:18.616000 audit[1867]: CRED_DISP pid=1867 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:00:18.641045 kernel: audit: type=1104 audit(1765890018.616:537): pid=1867 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:00:18.642692 systemd-logind[1599]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:00:18.645256 systemd-logind[1599]: Removed session 7. Dec 16 13:00:18.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.239.193.244:22-147.75.109.163:59048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:00:18.654038 kernel: audit: type=1131 audit(1765890018.621:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.239.193.244:22-147.75.109.163:59048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:00:19.399000 audit[3232]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:19.406053 kernel: audit: type=1325 audit(1765890019.399:539): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:19.399000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff2dc34c50 a2=0 a3=7fff2dc34c3c items=0 ppid=2965 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:19.418047 kernel: audit: type=1300 audit(1765890019.399:539): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff2dc34c50 a2=0 a3=7fff2dc34c3c items=0 ppid=2965 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:19.418416 kernel: audit: type=1327 audit(1765890019.399:539): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:19.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:19.429000 audit[3232]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:19.445044 kernel: audit: type=1325 audit(1765890019.429:540): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:19.429000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2dc34c50 a2=0 a3=0 items=0 ppid=2965 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:19.455064 kernel: audit: type=1300 audit(1765890019.429:540): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2dc34c50 a2=0 a3=0 items=0 ppid=2965 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:19.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:20.465000 audit[3234]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:20.465000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd8df6b970 a2=0 a3=7ffd8df6b95c items=0 ppid=2965 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:20.465000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:20.467000 audit[3234]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:20.467000 
audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8df6b970 a2=0 a3=0 items=0 ppid=2965 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:20.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:21.486000 audit[3236]: NETFILTER_CFG table=filter:109 family=2 entries=18 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:21.486000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffca8244c90 a2=0 a3=7ffca8244c7c items=0 ppid=2965 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:21.486000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:21.490000 audit[3236]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:21.490000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca8244c90 a2=0 a3=0 items=0 ppid=2965 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:21.490000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:23.047314 systemd[1]: Created slice kubepods-besteffort-pod7bc167bc_e1a6_4219_9c06_42aabcd0ea35.slice - libcontainer container kubepods-besteffort-pod7bc167bc_e1a6_4219_9c06_42aabcd0ea35.slice. 
Dec 16 13:00:23.063000 audit[3238]: NETFILTER_CFG table=filter:111 family=2 entries=21 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:23.063000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc1ffd4750 a2=0 a3=7ffc1ffd473c items=0 ppid=2965 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.063000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:23.081862 kubelet[2813]: I1216 13:00:23.081803 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bc167bc-e1a6-4219-9c06-42aabcd0ea35-tigera-ca-bundle\") pod \"calico-typha-69755b49f6-tmn5c\" (UID: \"7bc167bc-e1a6-4219-9c06-42aabcd0ea35\") " pod="calico-system/calico-typha-69755b49f6-tmn5c" Dec 16 13:00:23.083328 kubelet[2813]: I1216 13:00:23.081972 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrq4\" (UniqueName: \"kubernetes.io/projected/7bc167bc-e1a6-4219-9c06-42aabcd0ea35-kube-api-access-cqrq4\") pod \"calico-typha-69755b49f6-tmn5c\" (UID: \"7bc167bc-e1a6-4219-9c06-42aabcd0ea35\") " pod="calico-system/calico-typha-69755b49f6-tmn5c" Dec 16 13:00:23.083328 kubelet[2813]: I1216 13:00:23.082044 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7bc167bc-e1a6-4219-9c06-42aabcd0ea35-typha-certs\") pod \"calico-typha-69755b49f6-tmn5c\" (UID: \"7bc167bc-e1a6-4219-9c06-42aabcd0ea35\") " pod="calico-system/calico-typha-69755b49f6-tmn5c" Dec 16 13:00:23.088000 audit[3238]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:23.088000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1ffd4750 a2=0 a3=0 items=0 ppid=2965 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.088000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:23.184045 kubelet[2813]: I1216 13:00:23.182618 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/87d4d58b-3be5-4332-8fcf-f0954ed501fc-cni-bin-dir\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184045 kubelet[2813]: I1216 13:00:23.182663 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87d4d58b-3be5-4332-8fcf-f0954ed501fc-lib-modules\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184045 kubelet[2813]: I1216 13:00:23.182689 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/87d4d58b-3be5-4332-8fcf-f0954ed501fc-var-run-calico\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184045 kubelet[2813]: I1216 13:00:23.182730 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/87d4d58b-3be5-4332-8fcf-f0954ed501fc-policysync\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184045 kubelet[2813]: I1216 13:00:23.182759 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/87d4d58b-3be5-4332-8fcf-f0954ed501fc-node-certs\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184264 kubelet[2813]: I1216 13:00:23.182799 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/87d4d58b-3be5-4332-8fcf-f0954ed501fc-cni-net-dir\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184264 kubelet[2813]: I1216 13:00:23.182825 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/87d4d58b-3be5-4332-8fcf-f0954ed501fc-flexvol-driver-host\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184264 kubelet[2813]: I1216 13:00:23.182851 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/87d4d58b-3be5-4332-8fcf-f0954ed501fc-var-lib-calico\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184264 kubelet[2813]: I1216 13:00:23.182875 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/87d4d58b-3be5-4332-8fcf-f0954ed501fc-cni-log-dir\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184264 kubelet[2813]: I1216 13:00:23.182897 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87d4d58b-3be5-4332-8fcf-f0954ed501fc-xtables-lock\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184367 kubelet[2813]: I1216 13:00:23.182918 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87d4d58b-3be5-4332-8fcf-f0954ed501fc-tigera-ca-bundle\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.184367 kubelet[2813]: I1216 13:00:23.182940 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z94w9\" (UniqueName: 
\"kubernetes.io/projected/87d4d58b-3be5-4332-8fcf-f0954ed501fc-kube-api-access-z94w9\") pod \"calico-node-cfknv\" (UID: \"87d4d58b-3be5-4332-8fcf-f0954ed501fc\") " pod="calico-system/calico-node-cfknv" Dec 16 13:00:23.189832 systemd[1]: Created slice kubepods-besteffort-pod87d4d58b_3be5_4332_8fcf_f0954ed501fc.slice - libcontainer container kubepods-besteffort-pod87d4d58b_3be5_4332_8fcf_f0954ed501fc.slice. Dec 16 13:00:23.285559 kubelet[2813]: E1216 13:00:23.285533 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.285703 kubelet[2813]: W1216 13:00:23.285688 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.285760 kubelet[2813]: E1216 13:00:23.285748 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.286107 kubelet[2813]: E1216 13:00:23.286096 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.286170 kubelet[2813]: W1216 13:00:23.286159 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.286220 kubelet[2813]: E1216 13:00:23.286210 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.286469 kubelet[2813]: E1216 13:00:23.286458 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.286545 kubelet[2813]: W1216 13:00:23.286514 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.286619 kubelet[2813]: E1216 13:00:23.286600 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.286856 kubelet[2813]: E1216 13:00:23.286845 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.286966 kubelet[2813]: W1216 13:00:23.286901 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.286966 kubelet[2813]: E1216 13:00:23.286913 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.287316 kubelet[2813]: E1216 13:00:23.287305 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.287383 kubelet[2813]: W1216 13:00:23.287373 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.287427 kubelet[2813]: E1216 13:00:23.287418 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.291755 kubelet[2813]: E1216 13:00:23.291733 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.291755 kubelet[2813]: W1216 13:00:23.291752 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.291830 kubelet[2813]: E1216 13:00:23.291770 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.299892 kubelet[2813]: E1216 13:00:23.299820 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.299892 kubelet[2813]: W1216 13:00:23.299837 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.299892 kubelet[2813]: E1216 13:00:23.299848 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.356339 kubelet[2813]: E1216 13:00:23.355566 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:23.357323 containerd[1618]: time="2025-12-16T13:00:23.357241591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69755b49f6-tmn5c,Uid:7bc167bc-e1a6-4219-9c06-42aabcd0ea35,Namespace:calico-system,Attempt:0,}" Dec 16 13:00:23.387518 containerd[1618]: time="2025-12-16T13:00:23.387460926Z" level=info msg="connecting to shim bcda1ffa7da8f7a274d14bc33d1a6a7c4ec21f485f50aade45a26dd2bfeee712" address="unix:///run/containerd/s/8c33cdcd4721dbb4e4e85262efa73b6a89307209789608d9baf5b4a9a442476a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:23.396140 kubelet[2813]: E1216 13:00:23.395300 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:00:23.441180 systemd[1]: Started cri-containerd-bcda1ffa7da8f7a274d14bc33d1a6a7c4ec21f485f50aade45a26dd2bfeee712.scope - libcontainer container bcda1ffa7da8f7a274d14bc33d1a6a7c4ec21f485f50aade45a26dd2bfeee712. 
Dec 16 13:00:23.467000 audit: BPF prog-id=163 op=LOAD Dec 16 13:00:23.467000 audit: BPF prog-id=164 op=LOAD Dec 16 13:00:23.467000 audit[3275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3259 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646131666661376461386637613237346431346263333364316136 Dec 16 13:00:23.468000 audit: BPF prog-id=164 op=UNLOAD Dec 16 13:00:23.468000 audit[3275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646131666661376461386637613237346431346263333364316136 Dec 16 13:00:23.468000 audit: BPF prog-id=165 op=LOAD Dec 16 13:00:23.468000 audit[3275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3259 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646131666661376461386637613237346431346263333364316136 Dec 16 13:00:23.468000 audit: BPF prog-id=166 op=LOAD Dec 16 13:00:23.468000 audit[3275]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3259 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646131666661376461386637613237346431346263333364316136 Dec 16 13:00:23.468000 audit: BPF prog-id=166 op=UNLOAD Dec 16 13:00:23.468000 audit[3275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646131666661376461386637613237346431346263333364316136 Dec 16 13:00:23.468000 audit: BPF prog-id=165 op=UNLOAD Dec 16 13:00:23.468000 audit[3275]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646131666661376461386637613237346431346263333364316136 Dec 16 13:00:23.469000 audit: BPF prog-id=167 op=LOAD Dec 16 13:00:23.469000 audit[3275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3259 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263646131666661376461386637613237346431346263333364316136 Dec 16 13:00:23.485181 kubelet[2813]: E1216 13:00:23.485065 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.485181 kubelet[2813]: W1216 13:00:23.485108 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.485181 kubelet[2813]: E1216 13:00:23.485132 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.486097 kubelet[2813]: E1216 13:00:23.486083 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.486375 kubelet[2813]: W1216 13:00:23.486260 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.486375 kubelet[2813]: E1216 13:00:23.486277 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.486680 kubelet[2813]: E1216 13:00:23.486653 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.486944 kubelet[2813]: W1216 13:00:23.486874 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.486944 kubelet[2813]: E1216 13:00:23.486889 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.487775 kubelet[2813]: E1216 13:00:23.487722 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.487775 kubelet[2813]: W1216 13:00:23.487733 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.487775 kubelet[2813]: E1216 13:00:23.487743 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.488505 kubelet[2813]: E1216 13:00:23.488463 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.488505 kubelet[2813]: W1216 13:00:23.488474 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.488505 kubelet[2813]: E1216 13:00:23.488483 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.489228 kubelet[2813]: E1216 13:00:23.489215 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.489386 kubelet[2813]: W1216 13:00:23.489311 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.489386 kubelet[2813]: E1216 13:00:23.489332 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.489687 kubelet[2813]: E1216 13:00:23.489653 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.489687 kubelet[2813]: W1216 13:00:23.489665 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.489775 kubelet[2813]: E1216 13:00:23.489764 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.490106 kubelet[2813]: E1216 13:00:23.490079 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.490303 kubelet[2813]: W1216 13:00:23.490212 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.490303 kubelet[2813]: E1216 13:00:23.490228 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.491349 kubelet[2813]: E1216 13:00:23.491318 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.491349 kubelet[2813]: W1216 13:00:23.491328 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.491349 kubelet[2813]: E1216 13:00:23.491337 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.491627 kubelet[2813]: I1216 13:00:23.491561 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c658f10-e923-42a5-b425-72ee5f2a64c8-kubelet-dir\") pod \"csi-node-driver-n8llz\" (UID: \"1c658f10-e923-42a5-b425-72ee5f2a64c8\") " pod="calico-system/csi-node-driver-n8llz" Dec 16 13:00:23.491808 kubelet[2813]: E1216 13:00:23.491779 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.491808 kubelet[2813]: W1216 13:00:23.491789 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.491808 kubelet[2813]: E1216 13:00:23.491797 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.492239 kubelet[2813]: E1216 13:00:23.492194 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.492239 kubelet[2813]: W1216 13:00:23.492205 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.492396 kubelet[2813]: E1216 13:00:23.492214 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.492396 kubelet[2813]: I1216 13:00:23.492344 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1c658f10-e923-42a5-b425-72ee5f2a64c8-registration-dir\") pod \"csi-node-driver-n8llz\" (UID: \"1c658f10-e923-42a5-b425-72ee5f2a64c8\") " pod="calico-system/csi-node-driver-n8llz" Dec 16 13:00:23.492740 kubelet[2813]: E1216 13:00:23.492715 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.492837 kubelet[2813]: W1216 13:00:23.492783 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.492837 kubelet[2813]: E1216 13:00:23.492795 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.493101 kubelet[2813]: E1216 13:00:23.493080 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.493193 kubelet[2813]: W1216 13:00:23.493154 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.493193 kubelet[2813]: E1216 13:00:23.493167 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.493798 kubelet[2813]: E1216 13:00:23.493764 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.493798 kubelet[2813]: W1216 13:00:23.493777 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.493798 kubelet[2813]: E1216 13:00:23.493786 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.494211 kubelet[2813]: E1216 13:00:23.494177 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.494289 kubelet[2813]: W1216 13:00:23.494256 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.494289 kubelet[2813]: E1216 13:00:23.494278 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.494624 kubelet[2813]: E1216 13:00:23.494594 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.494624 kubelet[2813]: W1216 13:00:23.494604 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.494624 kubelet[2813]: E1216 13:00:23.494613 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.495241 kubelet[2813]: E1216 13:00:23.495116 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.495241 kubelet[2813]: W1216 13:00:23.495126 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.495241 kubelet[2813]: E1216 13:00:23.495136 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.495753 kubelet[2813]: E1216 13:00:23.495591 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.495753 kubelet[2813]: W1216 13:00:23.495727 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.495753 kubelet[2813]: E1216 13:00:23.495739 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.496182 kubelet[2813]: E1216 13:00:23.496148 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.496182 kubelet[2813]: W1216 13:00:23.496159 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.496182 kubelet[2813]: E1216 13:00:23.496167 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.496723 kubelet[2813]: E1216 13:00:23.496666 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.496723 kubelet[2813]: W1216 13:00:23.496677 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.496723 kubelet[2813]: E1216 13:00:23.496685 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.497155 kubelet[2813]: E1216 13:00:23.497135 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.497155 kubelet[2813]: W1216 13:00:23.497151 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.497218 kubelet[2813]: E1216 13:00:23.497163 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.498206 kubelet[2813]: E1216 13:00:23.498183 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.498206 kubelet[2813]: W1216 13:00:23.498199 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.498275 kubelet[2813]: E1216 13:00:23.498208 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.498472 kubelet[2813]: E1216 13:00:23.498432 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.498472 kubelet[2813]: W1216 13:00:23.498446 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.498472 kubelet[2813]: E1216 13:00:23.498454 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.498650 kubelet[2813]: E1216 13:00:23.498626 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.498650 kubelet[2813]: W1216 13:00:23.498639 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.498650 kubelet[2813]: E1216 13:00:23.498647 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.499199 kubelet[2813]: E1216 13:00:23.499175 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.499199 kubelet[2813]: W1216 13:00:23.499189 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.499199 kubelet[2813]: E1216 13:00:23.499198 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.499697 kubelet[2813]: E1216 13:00:23.499675 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.499976 kubelet[2813]: W1216 13:00:23.499945 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.499976 kubelet[2813]: E1216 13:00:23.499966 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.501757 kubelet[2813]: E1216 13:00:23.501738 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:23.502426 containerd[1618]: time="2025-12-16T13:00:23.502372563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cfknv,Uid:87d4d58b-3be5-4332-8fcf-f0954ed501fc,Namespace:calico-system,Attempt:0,}" Dec 16 13:00:23.520916 containerd[1618]: time="2025-12-16T13:00:23.520840846Z" level=info msg="connecting to shim 3b92f585c5791e97aa395966fbf2110812a3b700120c9ad6c4d60c2fc820c953" address="unix:///run/containerd/s/d4396dc045f18efd3348b72759840fecd005b361a729f2c539fa7d884d8c656b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:23.553263 systemd[1]: Started cri-containerd-3b92f585c5791e97aa395966fbf2110812a3b700120c9ad6c4d60c2fc820c953.scope - libcontainer container 3b92f585c5791e97aa395966fbf2110812a3b700120c9ad6c4d60c2fc820c953. Dec 16 13:00:23.559836 containerd[1618]: time="2025-12-16T13:00:23.559765712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69755b49f6-tmn5c,Uid:7bc167bc-e1a6-4219-9c06-42aabcd0ea35,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcda1ffa7da8f7a274d14bc33d1a6a7c4ec21f485f50aade45a26dd2bfeee712\"" Dec 16 13:00:23.561041 kubelet[2813]: E1216 13:00:23.560956 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:23.562295 containerd[1618]: time="2025-12-16T13:00:23.562221917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:00:23.581139 kernel: kauditd_printk_skb: 41 callbacks suppressed Dec 16 13:00:23.581211 kernel: audit: type=1334 audit(1765890023.577:555): prog-id=168 op=LOAD Dec 16 13:00:23.577000 audit: BPF prog-id=168 op=LOAD Dec 16 13:00:23.581000 audit: BPF prog-id=169 op=LOAD Dec 16 13:00:23.581000 audit[3350]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3339 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.586139 kernel: audit: type=1334 audit(1765890023.581:556): prog-id=169 op=LOAD Dec 16 13:00:23.586190 kernel: audit: type=1300 audit(1765890023.581:556): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3339 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.594912 kubelet[2813]: E1216 13:00:23.594881 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.594912 kubelet[2813]: W1216 13:00:23.594906 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.595301 kubelet[2813]: E1216 13:00:23.594931 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.595301 kubelet[2813]: I1216 13:00:23.594960 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1c658f10-e923-42a5-b425-72ee5f2a64c8-socket-dir\") pod \"csi-node-driver-n8llz\" (UID: \"1c658f10-e923-42a5-b425-72ee5f2a64c8\") " pod="calico-system/csi-node-driver-n8llz" Dec 16 13:00:23.595301 kubelet[2813]: E1216 13:00:23.595228 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.595301 kubelet[2813]: W1216 13:00:23.595242 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.595301 kubelet[2813]: E1216 13:00:23.595255 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.595547 kubelet[2813]: E1216 13:00:23.595528 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.595547 kubelet[2813]: W1216 13:00:23.595542 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.595547 kubelet[2813]: E1216 13:00:23.595551 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.595681 kubelet[2813]: I1216 13:00:23.595578 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ss5g\" (UniqueName: \"kubernetes.io/projected/1c658f10-e923-42a5-b425-72ee5f2a64c8-kube-api-access-7ss5g\") pod \"csi-node-driver-n8llz\" (UID: \"1c658f10-e923-42a5-b425-72ee5f2a64c8\") " pod="calico-system/csi-node-driver-n8llz" Dec 16 13:00:23.595845 kubelet[2813]: E1216 13:00:23.595826 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.595845 kubelet[2813]: W1216 13:00:23.595841 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.595942 kubelet[2813]: E1216 13:00:23.595850 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.595942 kubelet[2813]: I1216 13:00:23.595875 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1c658f10-e923-42a5-b425-72ee5f2a64c8-varrun\") pod \"csi-node-driver-n8llz\" (UID: \"1c658f10-e923-42a5-b425-72ee5f2a64c8\") " pod="calico-system/csi-node-driver-n8llz" Dec 16 13:00:23.596358 kubelet[2813]: E1216 13:00:23.596338 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.596358 kubelet[2813]: W1216 13:00:23.596347 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.596358 kubelet[2813]: E1216 13:00:23.596356 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.596844 kubelet[2813]: E1216 13:00:23.596780 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.596844 kubelet[2813]: W1216 13:00:23.596794 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.596844 kubelet[2813]: E1216 13:00:23.596808 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.597522 kubelet[2813]: E1216 13:00:23.597467 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.597522 kubelet[2813]: W1216 13:00:23.597485 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.597676 kubelet[2813]: E1216 13:00:23.597659 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362393266353835633537393165393761613339353936366662663231 Dec 16 13:00:23.598527 kubelet[2813]: E1216 13:00:23.598515 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.598701 kubelet[2813]: W1216 13:00:23.598611 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.598701 kubelet[2813]: E1216 13:00:23.598624 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.600262 kubelet[2813]: E1216 13:00:23.600157 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.600262 kubelet[2813]: W1216 13:00:23.600185 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.600262 kubelet[2813]: E1216 13:00:23.600196 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.600742 kubelet[2813]: E1216 13:00:23.600710 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.600742 kubelet[2813]: W1216 13:00:23.600721 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.600742 kubelet[2813]: E1216 13:00:23.600730 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.601109 kubelet[2813]: E1216 13:00:23.601078 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.601109 kubelet[2813]: W1216 13:00:23.601089 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.601109 kubelet[2813]: E1216 13:00:23.601097 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.601417 kubelet[2813]: E1216 13:00:23.601388 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.601417 kubelet[2813]: W1216 13:00:23.601398 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.601417 kubelet[2813]: E1216 13:00:23.601406 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.601766 kubelet[2813]: E1216 13:00:23.601736 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.601766 kubelet[2813]: W1216 13:00:23.601747 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.601766 kubelet[2813]: E1216 13:00:23.601755 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.602143 kubelet[2813]: E1216 13:00:23.602113 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.602143 kubelet[2813]: W1216 13:00:23.602123 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.602143 kubelet[2813]: E1216 13:00:23.602132 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.602468 kubelet[2813]: E1216 13:00:23.602438 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.602468 kubelet[2813]: W1216 13:00:23.602448 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.602468 kubelet[2813]: E1216 13:00:23.602456 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.602841 kubelet[2813]: E1216 13:00:23.602811 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.602841 kubelet[2813]: W1216 13:00:23.602822 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.602841 kubelet[2813]: E1216 13:00:23.602830 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.603394 kubelet[2813]: E1216 13:00:23.603365 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.603394 kubelet[2813]: W1216 13:00:23.603375 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.603394 kubelet[2813]: E1216 13:00:23.603383 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.603738 kubelet[2813]: E1216 13:00:23.603709 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.603738 kubelet[2813]: W1216 13:00:23.603719 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.603738 kubelet[2813]: E1216 13:00:23.603727 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.604112 kubelet[2813]: E1216 13:00:23.604074 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.604112 kubelet[2813]: W1216 13:00:23.604085 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.604112 kubelet[2813]: E1216 13:00:23.604093 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.581000 audit: BPF prog-id=169 op=UNLOAD Dec 16 13:00:23.627654 kernel: audit: type=1327 audit(1765890023.581:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362393266353835633537393165393761613339353936366662663231 Dec 16 13:00:23.627703 kernel: audit: type=1334 audit(1765890023.581:557): prog-id=169 op=UNLOAD Dec 16 13:00:23.635983 kernel: audit: type=1300 audit(1765890023.581:557): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.581000 audit[3350]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362393266353835633537393165393761613339353936366662663231 Dec 16 13:00:23.645462 kernel: audit: type=1327 audit(1765890023.581:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362393266353835633537393165393761613339353936366662663231 Dec 16 13:00:23.645558 kernel: audit: type=1334 audit(1765890023.581:558): prog-id=170 op=LOAD Dec 16 13:00:23.581000 audit: BPF prog-id=170 op=LOAD Dec 16 13:00:23.581000 audit[3350]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3339 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.655103 kernel: audit: type=1300 audit(1765890023.581:558): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3339 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.581000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362393266353835633537393165393761613339353936366662663231 Dec 16 13:00:23.665059 kernel: audit: type=1327 audit(1765890023.581:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362393266353835633537393165393761613339353936366662663231 Dec 16 13:00:23.581000 audit: BPF prog-id=171 op=LOAD Dec 16 13:00:23.581000 audit[3350]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3339 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362393266353835633537393165393761613339353936366662663231 Dec 16 13:00:23.581000 audit: BPF prog-id=171 op=UNLOAD Dec 16 13:00:23.581000 audit[3350]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362393266353835633537393165393761613339353936366662663231 Dec 16 13:00:23.581000 audit: BPF prog-id=170 op=UNLOAD Dec 16 13:00:23.581000 audit[3350]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362393266353835633537393165393761613339353936366662663231 Dec 16 13:00:23.581000 audit: BPF prog-id=172 op=LOAD Dec 16 13:00:23.581000 audit[3350]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3339 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:23.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362393266353835633537393165393761613339353936366662663231 Dec 16 13:00:23.671986 containerd[1618]: time="2025-12-16T13:00:23.671831759Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-cfknv,Uid:87d4d58b-3be5-4332-8fcf-f0954ed501fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b92f585c5791e97aa395966fbf2110812a3b700120c9ad6c4d60c2fc820c953\"" Dec 16 13:00:23.675958 kubelet[2813]: E1216 13:00:23.675906 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:23.698137 kubelet[2813]: E1216 13:00:23.698093 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.698137 kubelet[2813]: W1216 13:00:23.698134 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.698137 kubelet[2813]: E1216 13:00:23.698154 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.698473 kubelet[2813]: E1216 13:00:23.698451 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.698506 kubelet[2813]: W1216 13:00:23.698478 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.698506 kubelet[2813]: E1216 13:00:23.698489 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.698834 kubelet[2813]: E1216 13:00:23.698813 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.698834 kubelet[2813]: W1216 13:00:23.698829 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.698898 kubelet[2813]: E1216 13:00:23.698838 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.699341 kubelet[2813]: E1216 13:00:23.699268 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.699341 kubelet[2813]: W1216 13:00:23.699298 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.699341 kubelet[2813]: E1216 13:00:23.699307 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.699915 kubelet[2813]: E1216 13:00:23.699871 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.699915 kubelet[2813]: W1216 13:00:23.699906 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.699915 kubelet[2813]: E1216 13:00:23.699916 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.700364 kubelet[2813]: E1216 13:00:23.700303 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.700364 kubelet[2813]: W1216 13:00:23.700314 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.700364 kubelet[2813]: E1216 13:00:23.700322 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.700618 kubelet[2813]: E1216 13:00:23.700568 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.700618 kubelet[2813]: W1216 13:00:23.700608 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.700618 kubelet[2813]: E1216 13:00:23.700617 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.700950 kubelet[2813]: E1216 13:00:23.700935 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.700950 kubelet[2813]: W1216 13:00:23.700947 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.701024 kubelet[2813]: E1216 13:00:23.700955 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.701510 kubelet[2813]: E1216 13:00:23.701478 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.701510 kubelet[2813]: W1216 13:00:23.701488 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.701510 kubelet[2813]: E1216 13:00:23.701498 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.701777 kubelet[2813]: E1216 13:00:23.701754 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.701804 kubelet[2813]: W1216 13:00:23.701769 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.701804 kubelet[2813]: E1216 13:00:23.701796 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.702370 kubelet[2813]: E1216 13:00:23.702328 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.702370 kubelet[2813]: W1216 13:00:23.702339 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.702370 kubelet[2813]: E1216 13:00:23.702347 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.702915 kubelet[2813]: E1216 13:00:23.702891 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.702915 kubelet[2813]: W1216 13:00:23.702906 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.702915 kubelet[2813]: E1216 13:00:23.702916 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.703429 kubelet[2813]: E1216 13:00:23.703404 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.703429 kubelet[2813]: W1216 13:00:23.703418 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.703429 kubelet[2813]: E1216 13:00:23.703426 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.703815 kubelet[2813]: E1216 13:00:23.703793 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.703815 kubelet[2813]: W1216 13:00:23.703804 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.703815 kubelet[2813]: E1216 13:00:23.703813 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:23.704215 kubelet[2813]: E1216 13:00:23.704190 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.704215 kubelet[2813]: W1216 13:00:23.704203 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.704215 kubelet[2813]: E1216 13:00:23.704211 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:23.713144 kubelet[2813]: E1216 13:00:23.713114 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:23.713144 kubelet[2813]: W1216 13:00:23.713144 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:23.713208 kubelet[2813]: E1216 13:00:23.713156 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:24.100000 audit[3419]: NETFILTER_CFG table=filter:113 family=2 entries=22 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:24.100000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff8e4780a0 a2=0 a3=7fff8e47808c items=0 ppid=2965 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:24.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:24.105000 audit[3419]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:24.105000 audit[3419]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8e4780a0 a2=0 a3=0 items=0 ppid=2965 pid=3419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:24.105000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:24.448562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount636998731.mount: Deactivated successfully. 
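(Note on the repeated FlexVolume entries above: the kubelet is probing its FlexVolume plugin directory, finds the nodeagent~uds subdirectory, but the expected uds executable is not present, so every init call returns empty output and the JSON unmarshal fails with "unexpected end of JSON input". Purely as an illustration of the contract involved, and not something present on this host, a minimal driver stub that would satisfy the init probe could look like the following Python sketch; the install path in the comment mirrors the one reported in the log, everything else here is an assumption for illustration.)

    #!/usr/bin/env python3
    # Hypothetical minimal FlexVolume driver stub ("uds"), shown only to illustrate
    # the JSON reply the kubelet expects; this file is not part of the logged system.
    # It would be installed as an executable at, for example:
    #   /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Empty stdout is what produces "unexpected end of JSON input" in the log;
            # a driver must always answer each call with a JSON document like this one.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Operations this stub does not implement are reported as unsupported.
        print(json.dumps({"status": "Not supported",
                          "message": f"operation {op!r} not implemented"}))
        return 0

    if __name__ == "__main__":
        sys.exit(main())

(On nodes that do not ship this driver the probe failures are typically harmless noise; in a Calico deployment like the one in this log, the pod2daemon-flexvol image being pulled further down usually installs the uds driver into that directory, after which the probe errors stop.)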
Dec 16 13:00:25.057134 containerd[1618]: time="2025-12-16T13:00:25.057090013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:25.058089 containerd[1618]: time="2025-12-16T13:00:25.057933301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 13:00:25.058634 containerd[1618]: time="2025-12-16T13:00:25.058608297Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:25.060544 containerd[1618]: time="2025-12-16T13:00:25.060508064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:25.061231 containerd[1618]: time="2025-12-16T13:00:25.061209341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.498689231s" Dec 16 13:00:25.061312 containerd[1618]: time="2025-12-16T13:00:25.061296922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:00:25.062088 containerd[1618]: time="2025-12-16T13:00:25.062038988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:00:25.077403 containerd[1618]: time="2025-12-16T13:00:25.077379829Z" level=info msg="CreateContainer within sandbox \"bcda1ffa7da8f7a274d14bc33d1a6a7c4ec21f485f50aade45a26dd2bfeee712\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:00:25.082559 containerd[1618]: time="2025-12-16T13:00:25.082539036Z" level=info msg="Container d06c57e9aa229449bb5dccaa5f7bda3a93959c366d4a723904b6467352d0585f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:00:25.088325 containerd[1618]: time="2025-12-16T13:00:25.088297959Z" level=info msg="CreateContainer within sandbox \"bcda1ffa7da8f7a274d14bc33d1a6a7c4ec21f485f50aade45a26dd2bfeee712\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d06c57e9aa229449bb5dccaa5f7bda3a93959c366d4a723904b6467352d0585f\"" Dec 16 13:00:25.089066 containerd[1618]: time="2025-12-16T13:00:25.088960885Z" level=info msg="StartContainer for \"d06c57e9aa229449bb5dccaa5f7bda3a93959c366d4a723904b6467352d0585f\"" Dec 16 13:00:25.090423 containerd[1618]: time="2025-12-16T13:00:25.090367268Z" level=info msg="connecting to shim d06c57e9aa229449bb5dccaa5f7bda3a93959c366d4a723904b6467352d0585f" address="unix:///run/containerd/s/8c33cdcd4721dbb4e4e85262efa73b6a89307209789608d9baf5b4a9a442476a" protocol=ttrpc version=3 Dec 16 13:00:25.112379 systemd[1]: Started cri-containerd-d06c57e9aa229449bb5dccaa5f7bda3a93959c366d4a723904b6467352d0585f.scope - libcontainer container d06c57e9aa229449bb5dccaa5f7bda3a93959c366d4a723904b6467352d0585f. 
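(Note on the audit records in this section, including the ones that follow: each PROCTITLE field carries the process command line hex-encoded, with NUL bytes separating the arguments, which is why the runc invocations appear as long hex strings. A small decoder along the following lines makes them readable; this is a helper sketch for reading the log, not a tool from the system being logged.)

    #!/usr/bin/env python3
    # Helper (not part of this log) to make audit PROCTITLE records readable:
    # the proctitle= value is the process command line, hex-encoded, with NUL
    # bytes separating the individual arguments.
    import sys

    def decode_proctitle(hex_value: str) -> str:
        # Some values are cut off mid-byte where a log line is truncated;
        # drop a trailing half byte so bytes.fromhex() still succeeds.
        if len(hex_value) % 2:
            hex_value = hex_value[:-1]
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode(errors="replace")
                        for arg in raw.split(b"\x00") if arg)

    if __name__ == "__main__":
        # Usage: pipe a proctitle hex string in, e.g.
        #   echo 72756E63002D2D726F6F74 | ./decode_proctitle.py
        # which prints: runc --root
        for line in sys.stdin:
            line = line.strip()
            if line:
                print(decode_proctitle(line))

(For example, the short string 72756E63002D2D726F6F74 decodes to "runc --root"; the full values in this section decode to runc invocations of the form runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>..., with the container id left incomplete wherever the log line itself is cut off.)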
Dec 16 13:00:25.127000 audit: BPF prog-id=173 op=LOAD Dec 16 13:00:25.128000 audit: BPF prog-id=174 op=LOAD Dec 16 13:00:25.128000 audit[3430]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3259 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:25.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430366335376539616132323934343962623564636361613566376264 Dec 16 13:00:25.128000 audit: BPF prog-id=174 op=UNLOAD Dec 16 13:00:25.128000 audit[3430]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:25.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430366335376539616132323934343962623564636361613566376264 Dec 16 13:00:25.128000 audit: BPF prog-id=175 op=LOAD Dec 16 13:00:25.128000 audit[3430]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3259 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:25.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430366335376539616132323934343962623564636361613566376264 Dec 16 13:00:25.128000 audit: BPF prog-id=176 op=LOAD Dec 16 13:00:25.128000 audit[3430]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3259 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:25.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430366335376539616132323934343962623564636361613566376264 Dec 16 13:00:25.128000 audit: BPF prog-id=176 op=UNLOAD Dec 16 13:00:25.128000 audit[3430]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:25.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430366335376539616132323934343962623564636361613566376264 Dec 16 13:00:25.128000 audit: BPF prog-id=175 op=UNLOAD Dec 16 13:00:25.128000 audit[3430]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3259 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:25.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430366335376539616132323934343962623564636361613566376264 Dec 16 13:00:25.128000 audit: BPF prog-id=177 op=LOAD Dec 16 13:00:25.128000 audit[3430]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3259 pid=3430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:25.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430366335376539616132323934343962623564636361613566376264 Dec 16 13:00:25.157546 kubelet[2813]: E1216 13:00:25.157513 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:00:25.171926 containerd[1618]: time="2025-12-16T13:00:25.171877744Z" level=info msg="StartContainer for \"d06c57e9aa229449bb5dccaa5f7bda3a93959c366d4a723904b6467352d0585f\" returns successfully" Dec 16 13:00:25.248181 kubelet[2813]: E1216 13:00:25.247908 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:25.311318 kubelet[2813]: E1216 13:00:25.311212 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.311318 kubelet[2813]: W1216 13:00:25.311234 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.311318 kubelet[2813]: E1216 13:00:25.311253 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.311907 kubelet[2813]: E1216 13:00:25.311889 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.311907 kubelet[2813]: W1216 13:00:25.311904 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.311993 kubelet[2813]: E1216 13:00:25.311914 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:25.312960 kubelet[2813]: E1216 13:00:25.312122 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.312960 kubelet[2813]: W1216 13:00:25.312130 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.312960 kubelet[2813]: E1216 13:00:25.312138 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.313233 kubelet[2813]: E1216 13:00:25.313154 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.313233 kubelet[2813]: W1216 13:00:25.313170 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.313233 kubelet[2813]: E1216 13:00:25.313179 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.313420 kubelet[2813]: E1216 13:00:25.313404 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.313420 kubelet[2813]: W1216 13:00:25.313418 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.313483 kubelet[2813]: E1216 13:00:25.313427 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.313640 kubelet[2813]: E1216 13:00:25.313623 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.313640 kubelet[2813]: W1216 13:00:25.313636 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.313710 kubelet[2813]: E1216 13:00:25.313644 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.313857 kubelet[2813]: E1216 13:00:25.313841 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.313857 kubelet[2813]: W1216 13:00:25.313854 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.313922 kubelet[2813]: E1216 13:00:25.313861 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:25.314079 kubelet[2813]: E1216 13:00:25.314063 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.314079 kubelet[2813]: W1216 13:00:25.314075 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.314155 kubelet[2813]: E1216 13:00:25.314083 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.316089 kubelet[2813]: E1216 13:00:25.316070 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.316089 kubelet[2813]: W1216 13:00:25.316084 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.316089 kubelet[2813]: E1216 13:00:25.316094 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.317214 kubelet[2813]: E1216 13:00:25.317194 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.317214 kubelet[2813]: W1216 13:00:25.317209 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.317278 kubelet[2813]: E1216 13:00:25.317219 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.317433 kubelet[2813]: E1216 13:00:25.317407 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.317433 kubelet[2813]: W1216 13:00:25.317420 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.317433 kubelet[2813]: E1216 13:00:25.317428 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.317665 kubelet[2813]: E1216 13:00:25.317649 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.317665 kubelet[2813]: W1216 13:00:25.317663 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.317738 kubelet[2813]: E1216 13:00:25.317671 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:25.317868 kubelet[2813]: E1216 13:00:25.317852 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.317868 kubelet[2813]: W1216 13:00:25.317862 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.317868 kubelet[2813]: E1216 13:00:25.317870 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.318250 kubelet[2813]: E1216 13:00:25.318236 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.318250 kubelet[2813]: W1216 13:00:25.318247 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.318293 kubelet[2813]: E1216 13:00:25.318255 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.318435 kubelet[2813]: E1216 13:00:25.318422 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.318435 kubelet[2813]: W1216 13:00:25.318433 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.318509 kubelet[2813]: E1216 13:00:25.318440 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.318707 kubelet[2813]: E1216 13:00:25.318679 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.318707 kubelet[2813]: W1216 13:00:25.318691 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.318707 kubelet[2813]: E1216 13:00:25.318698 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.318936 kubelet[2813]: E1216 13:00:25.318921 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.318936 kubelet[2813]: W1216 13:00:25.318933 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.319026 kubelet[2813]: E1216 13:00:25.318941 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:25.319389 kubelet[2813]: E1216 13:00:25.319372 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.319389 kubelet[2813]: W1216 13:00:25.319384 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.319449 kubelet[2813]: E1216 13:00:25.319392 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.319677 kubelet[2813]: E1216 13:00:25.319662 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.319677 kubelet[2813]: W1216 13:00:25.319675 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.319748 kubelet[2813]: E1216 13:00:25.319683 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.319944 kubelet[2813]: E1216 13:00:25.319861 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.319944 kubelet[2813]: W1216 13:00:25.319868 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.319944 kubelet[2813]: E1216 13:00:25.319876 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.320083 kubelet[2813]: E1216 13:00:25.320072 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.320083 kubelet[2813]: W1216 13:00:25.320083 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.320234 kubelet[2813]: E1216 13:00:25.320090 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.320327 kubelet[2813]: E1216 13:00:25.320306 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.320327 kubelet[2813]: W1216 13:00:25.320318 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.320327 kubelet[2813]: E1216 13:00:25.320326 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:25.320630 kubelet[2813]: E1216 13:00:25.320538 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.320630 kubelet[2813]: W1216 13:00:25.320552 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.320630 kubelet[2813]: E1216 13:00:25.320593 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.320840 kubelet[2813]: E1216 13:00:25.320825 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.320840 kubelet[2813]: W1216 13:00:25.320835 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.320919 kubelet[2813]: E1216 13:00:25.320842 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.321592 kubelet[2813]: E1216 13:00:25.321467 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.321592 kubelet[2813]: W1216 13:00:25.321478 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.321592 kubelet[2813]: E1216 13:00:25.321487 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.321821 kubelet[2813]: E1216 13:00:25.321706 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.322438 kubelet[2813]: W1216 13:00:25.322419 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.322438 kubelet[2813]: E1216 13:00:25.322436 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.322763 kubelet[2813]: E1216 13:00:25.322745 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.322763 kubelet[2813]: W1216 13:00:25.322757 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.322824 kubelet[2813]: E1216 13:00:25.322766 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:25.323060 kubelet[2813]: E1216 13:00:25.323046 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.323060 kubelet[2813]: W1216 13:00:25.323057 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.323112 kubelet[2813]: E1216 13:00:25.323065 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.323336 kubelet[2813]: E1216 13:00:25.323314 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.323336 kubelet[2813]: W1216 13:00:25.323329 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.323336 kubelet[2813]: E1216 13:00:25.323337 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.323652 kubelet[2813]: E1216 13:00:25.323630 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.323652 kubelet[2813]: W1216 13:00:25.323645 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.323703 kubelet[2813]: E1216 13:00:25.323653 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.325259 kubelet[2813]: E1216 13:00:25.325239 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.325259 kubelet[2813]: W1216 13:00:25.325254 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.325346 kubelet[2813]: E1216 13:00:25.325263 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:25.325620 kubelet[2813]: E1216 13:00:25.325598 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.325620 kubelet[2813]: W1216 13:00:25.325612 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.325620 kubelet[2813]: E1216 13:00:25.325620 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:25.326877 kubelet[2813]: E1216 13:00:25.326856 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:25.326877 kubelet[2813]: W1216 13:00:25.326872 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:25.326949 kubelet[2813]: E1216 13:00:25.326882 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.248600 kubelet[2813]: I1216 13:00:26.248556 2813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:00:26.249476 kubelet[2813]: E1216 13:00:26.249346 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:26.325165 kubelet[2813]: E1216 13:00:26.325139 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.325165 kubelet[2813]: W1216 13:00:26.325157 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.325486 kubelet[2813]: E1216 13:00:26.325174 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.325558 kubelet[2813]: E1216 13:00:26.325546 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.325558 kubelet[2813]: W1216 13:00:26.325557 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.325608 kubelet[2813]: E1216 13:00:26.325565 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.325754 kubelet[2813]: E1216 13:00:26.325743 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.325754 kubelet[2813]: W1216 13:00:26.325753 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.325805 kubelet[2813]: E1216 13:00:26.325760 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:26.325936 kubelet[2813]: E1216 13:00:26.325924 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.325936 kubelet[2813]: W1216 13:00:26.325934 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.325982 kubelet[2813]: E1216 13:00:26.325942 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.326488 kubelet[2813]: E1216 13:00:26.326475 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.326488 kubelet[2813]: W1216 13:00:26.326486 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.326553 kubelet[2813]: E1216 13:00:26.326494 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.326685 kubelet[2813]: E1216 13:00:26.326674 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.326685 kubelet[2813]: W1216 13:00:26.326684 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.326727 kubelet[2813]: E1216 13:00:26.326691 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.326880 kubelet[2813]: E1216 13:00:26.326868 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.326880 kubelet[2813]: W1216 13:00:26.326878 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.326934 kubelet[2813]: E1216 13:00:26.326886 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.327082 kubelet[2813]: E1216 13:00:26.327071 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.327082 kubelet[2813]: W1216 13:00:26.327080 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.327127 kubelet[2813]: E1216 13:00:26.327090 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:26.327280 kubelet[2813]: E1216 13:00:26.327269 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.327280 kubelet[2813]: W1216 13:00:26.327279 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.327332 kubelet[2813]: E1216 13:00:26.327286 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.327478 kubelet[2813]: E1216 13:00:26.327462 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.327478 kubelet[2813]: W1216 13:00:26.327474 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.327540 kubelet[2813]: E1216 13:00:26.327485 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.327682 kubelet[2813]: E1216 13:00:26.327669 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.327682 kubelet[2813]: W1216 13:00:26.327680 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.327736 kubelet[2813]: E1216 13:00:26.327688 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.327879 kubelet[2813]: E1216 13:00:26.327865 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.327879 kubelet[2813]: W1216 13:00:26.327876 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.327955 kubelet[2813]: E1216 13:00:26.327884 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.328097 kubelet[2813]: E1216 13:00:26.328084 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.328097 kubelet[2813]: W1216 13:00:26.328095 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.328158 kubelet[2813]: E1216 13:00:26.328102 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:26.328292 kubelet[2813]: E1216 13:00:26.328278 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.328292 kubelet[2813]: W1216 13:00:26.328290 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.328352 kubelet[2813]: E1216 13:00:26.328299 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.328491 kubelet[2813]: E1216 13:00:26.328478 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.328491 kubelet[2813]: W1216 13:00:26.328488 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.328552 kubelet[2813]: E1216 13:00:26.328497 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.328711 kubelet[2813]: E1216 13:00:26.328699 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.328711 kubelet[2813]: W1216 13:00:26.328709 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.328756 kubelet[2813]: E1216 13:00:26.328716 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.328935 kubelet[2813]: E1216 13:00:26.328923 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.328935 kubelet[2813]: W1216 13:00:26.328933 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.328986 kubelet[2813]: E1216 13:00:26.328941 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.329184 kubelet[2813]: E1216 13:00:26.329171 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.329184 kubelet[2813]: W1216 13:00:26.329181 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.329241 kubelet[2813]: E1216 13:00:26.329189 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:26.329430 kubelet[2813]: E1216 13:00:26.329418 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.329430 kubelet[2813]: W1216 13:00:26.329428 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.329480 kubelet[2813]: E1216 13:00:26.329437 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.329657 kubelet[2813]: E1216 13:00:26.329645 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.329657 kubelet[2813]: W1216 13:00:26.329655 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.329710 kubelet[2813]: E1216 13:00:26.329662 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.329868 kubelet[2813]: E1216 13:00:26.329856 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.329868 kubelet[2813]: W1216 13:00:26.329866 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.329917 kubelet[2813]: E1216 13:00:26.329873 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.330145 kubelet[2813]: E1216 13:00:26.330133 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.330145 kubelet[2813]: W1216 13:00:26.330144 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.330192 kubelet[2813]: E1216 13:00:26.330152 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.330562 kubelet[2813]: E1216 13:00:26.330549 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.330562 kubelet[2813]: W1216 13:00:26.330560 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.330615 kubelet[2813]: E1216 13:00:26.330568 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:26.330762 kubelet[2813]: E1216 13:00:26.330750 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.330762 kubelet[2813]: W1216 13:00:26.330760 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.330806 kubelet[2813]: E1216 13:00:26.330767 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.330970 kubelet[2813]: E1216 13:00:26.330958 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.330970 kubelet[2813]: W1216 13:00:26.330968 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.331054 kubelet[2813]: E1216 13:00:26.330975 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.331207 kubelet[2813]: E1216 13:00:26.331195 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.331207 kubelet[2813]: W1216 13:00:26.331205 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.331275 kubelet[2813]: E1216 13:00:26.331214 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.331412 kubelet[2813]: E1216 13:00:26.331398 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.331412 kubelet[2813]: W1216 13:00:26.331409 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.331484 kubelet[2813]: E1216 13:00:26.331419 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.331662 kubelet[2813]: E1216 13:00:26.331649 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.331662 kubelet[2813]: W1216 13:00:26.331660 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.331712 kubelet[2813]: E1216 13:00:26.331668 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:26.331987 kubelet[2813]: E1216 13:00:26.331975 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.331987 kubelet[2813]: W1216 13:00:26.331986 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.332111 kubelet[2813]: E1216 13:00:26.331994 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.332245 kubelet[2813]: E1216 13:00:26.332232 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.332245 kubelet[2813]: W1216 13:00:26.332243 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.332294 kubelet[2813]: E1216 13:00:26.332252 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.332438 kubelet[2813]: E1216 13:00:26.332423 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.332438 kubelet[2813]: W1216 13:00:26.332435 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.332504 kubelet[2813]: E1216 13:00:26.332443 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.335650 kubelet[2813]: E1216 13:00:26.335628 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.335650 kubelet[2813]: W1216 13:00:26.335642 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.335732 kubelet[2813]: E1216 13:00:26.335652 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:00:26.336220 kubelet[2813]: E1216 13:00:26.336208 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:00:26.336220 kubelet[2813]: W1216 13:00:26.336218 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:00:26.336283 kubelet[2813]: E1216 13:00:26.336227 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:00:26.454057 containerd[1618]: time="2025-12-16T13:00:26.453994498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:26.454864 containerd[1618]: time="2025-12-16T13:00:26.454617384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:26.455204 containerd[1618]: time="2025-12-16T13:00:26.455172828Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:26.457365 containerd[1618]: time="2025-12-16T13:00:26.457341387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:26.457879 containerd[1618]: time="2025-12-16T13:00:26.457855251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.395792962s" Dec 16 13:00:26.457941 containerd[1618]: time="2025-12-16T13:00:26.457881402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:00:26.461049 containerd[1618]: time="2025-12-16T13:00:26.460956158Z" level=info msg="CreateContainer within sandbox \"3b92f585c5791e97aa395966fbf2110812a3b700120c9ad6c4d60c2fc820c953\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:00:26.469255 containerd[1618]: time="2025-12-16T13:00:26.469231459Z" level=info msg="Container 13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:00:26.472851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2267347432.mount: Deactivated successfully. Dec 16 13:00:26.478932 containerd[1618]: time="2025-12-16T13:00:26.478892312Z" level=info msg="CreateContainer within sandbox \"3b92f585c5791e97aa395966fbf2110812a3b700120c9ad6c4d60c2fc820c953\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98\"" Dec 16 13:00:26.480587 containerd[1618]: time="2025-12-16T13:00:26.479548638Z" level=info msg="StartContainer for \"13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98\"" Dec 16 13:00:26.482141 containerd[1618]: time="2025-12-16T13:00:26.482064909Z" level=info msg="connecting to shim 13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98" address="unix:///run/containerd/s/d4396dc045f18efd3348b72759840fecd005b361a729f2c539fa7d884d8c656b" protocol=ttrpc version=3 Dec 16 13:00:26.505342 systemd[1]: Started cri-containerd-13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98.scope - libcontainer container 13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98. 
Dec 16 13:00:26.558000 audit: BPF prog-id=178 op=LOAD Dec 16 13:00:26.558000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3339 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:26.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133626166616663383663363565663234383636303931326532366232 Dec 16 13:00:26.558000 audit: BPF prog-id=179 op=LOAD Dec 16 13:00:26.558000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3339 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:26.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133626166616663383663363565663234383636303931326532366232 Dec 16 13:00:26.558000 audit: BPF prog-id=179 op=UNLOAD Dec 16 13:00:26.558000 audit[3539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:26.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133626166616663383663363565663234383636303931326532366232 Dec 16 13:00:26.558000 audit: BPF prog-id=178 op=UNLOAD Dec 16 13:00:26.558000 audit[3539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:26.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133626166616663383663363565663234383636303931326532366232 Dec 16 13:00:26.558000 audit: BPF prog-id=180 op=LOAD Dec 16 13:00:26.558000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3339 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:26.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133626166616663383663363565663234383636303931326532366232 Dec 16 13:00:26.579381 containerd[1618]: time="2025-12-16T13:00:26.579339634Z" level=info msg="StartContainer for 
\"13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98\" returns successfully" Dec 16 13:00:26.596816 systemd[1]: cri-containerd-13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98.scope: Deactivated successfully. Dec 16 13:00:26.602000 audit: BPF prog-id=180 op=UNLOAD Dec 16 13:00:26.606869 containerd[1618]: time="2025-12-16T13:00:26.606833211Z" level=info msg="received container exit event container_id:\"13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98\" id:\"13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98\" pid:3551 exited_at:{seconds:1765890026 nanos:604992555}" Dec 16 13:00:26.636886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13bafafc86c65ef248660912e26b2d78127355f456bd0ef457d9f5ef81870b98-rootfs.mount: Deactivated successfully. Dec 16 13:00:27.156607 kubelet[2813]: E1216 13:00:27.156553 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:00:27.252709 kubelet[2813]: E1216 13:00:27.252348 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:27.253726 containerd[1618]: time="2025-12-16T13:00:27.253306356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:00:27.266355 kubelet[2813]: I1216 13:00:27.266272 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69755b49f6-tmn5c" podStartSLOduration=2.766031884 podStartE2EDuration="4.26624079s" podCreationTimestamp="2025-12-16 13:00:23 +0000 UTC" firstStartedPulling="2025-12-16 13:00:23.561734592 +0000 UTC m=+21.508159075" lastFinishedPulling="2025-12-16 13:00:25.061943488 +0000 UTC m=+23.008367981" observedRunningTime="2025-12-16 13:00:25.263844167 +0000 UTC m=+23.210268650" watchObservedRunningTime="2025-12-16 13:00:27.26624079 +0000 UTC m=+25.212665273" Dec 16 13:00:29.094099 containerd[1618]: time="2025-12-16T13:00:29.093989360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:29.094944 containerd[1618]: time="2025-12-16T13:00:29.094781786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 13:00:29.095427 containerd[1618]: time="2025-12-16T13:00:29.095397790Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:29.097000 containerd[1618]: time="2025-12-16T13:00:29.096974071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:29.097546 containerd[1618]: time="2025-12-16T13:00:29.097485915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.844123748s" Dec 16 13:00:29.097546 containerd[1618]: time="2025-12-16T13:00:29.097516005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:00:29.103030 containerd[1618]: time="2025-12-16T13:00:29.102726292Z" level=info msg="CreateContainer within sandbox \"3b92f585c5791e97aa395966fbf2110812a3b700120c9ad6c4d60c2fc820c953\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:00:29.112671 containerd[1618]: time="2025-12-16T13:00:29.111352353Z" level=info msg="Container b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:00:29.123391 containerd[1618]: time="2025-12-16T13:00:29.123306807Z" level=info msg="CreateContainer within sandbox \"3b92f585c5791e97aa395966fbf2110812a3b700120c9ad6c4d60c2fc820c953\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a\"" Dec 16 13:00:29.123963 containerd[1618]: time="2025-12-16T13:00:29.123923802Z" level=info msg="StartContainer for \"b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a\"" Dec 16 13:00:29.125575 containerd[1618]: time="2025-12-16T13:00:29.125545353Z" level=info msg="connecting to shim b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a" address="unix:///run/containerd/s/d4396dc045f18efd3348b72759840fecd005b361a729f2c539fa7d884d8c656b" protocol=ttrpc version=3 Dec 16 13:00:29.156899 kubelet[2813]: E1216 13:00:29.156866 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:00:29.157307 systemd[1]: Started cri-containerd-b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a.scope - libcontainer container b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a. 
Dec 16 13:00:29.209000 audit: BPF prog-id=181 op=LOAD Dec 16 13:00:29.211229 kernel: kauditd_printk_skb: 56 callbacks suppressed Dec 16 13:00:29.211490 kernel: audit: type=1334 audit(1765890029.209:579): prog-id=181 op=LOAD Dec 16 13:00:29.209000 audit[3598]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3339 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:29.216459 kernel: audit: type=1300 audit(1765890029.209:579): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3339 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:29.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238353064356136663461343038633330303738303964643238633865 Dec 16 13:00:29.224611 kernel: audit: type=1327 audit(1765890029.209:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238353064356136663461343038633330303738303964643238633865 Dec 16 13:00:29.209000 audit: BPF prog-id=182 op=LOAD Dec 16 13:00:29.231115 kernel: audit: type=1334 audit(1765890029.209:580): prog-id=182 op=LOAD Dec 16 13:00:29.209000 audit[3598]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3339 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:29.234294 kernel: audit: type=1300 audit(1765890029.209:580): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3339 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:29.242838 kernel: audit: type=1327 audit(1765890029.209:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238353064356136663461343038633330303738303964643238633865 Dec 16 13:00:29.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238353064356136663461343038633330303738303964643238633865 Dec 16 13:00:29.250635 kernel: audit: type=1334 audit(1765890029.209:581): prog-id=182 op=UNLOAD Dec 16 13:00:29.209000 audit: BPF prog-id=182 op=UNLOAD Dec 16 13:00:29.255290 kernel: audit: type=1300 audit(1765890029.209:581): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:29.209000 
audit[3598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:29.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238353064356136663461343038633330303738303964643238633865 Dec 16 13:00:29.269755 kernel: audit: type=1327 audit(1765890029.209:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238353064356136663461343038633330303738303964643238633865 Dec 16 13:00:29.269806 kernel: audit: type=1334 audit(1765890029.209:582): prog-id=181 op=UNLOAD Dec 16 13:00:29.209000 audit: BPF prog-id=181 op=UNLOAD Dec 16 13:00:29.209000 audit[3598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:29.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238353064356136663461343038633330303738303964643238633865 Dec 16 13:00:29.209000 audit: BPF prog-id=183 op=LOAD Dec 16 13:00:29.209000 audit[3598]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3339 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:29.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238353064356136663461343038633330303738303964643238633865 Dec 16 13:00:29.274087 containerd[1618]: time="2025-12-16T13:00:29.274051424Z" level=info msg="StartContainer for \"b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a\" returns successfully" Dec 16 13:00:29.789603 containerd[1618]: time="2025-12-16T13:00:29.789559822Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:00:29.793888 systemd[1]: cri-containerd-b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a.scope: Deactivated successfully. Dec 16 13:00:29.794264 systemd[1]: cri-containerd-b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a.scope: Consumed 559ms CPU time, 192.1M memory peak, 171.3M written to disk. 
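The kubelet and containerd messages above both reduce to the same condition: nothing exists yet under /etc/cni/net.d, so config reload fails ("no network config found in /etc/cni/net.d: cni plugin not initialized") and the node reports NetworkReady=false / NetworkPluginNotReady. Below is a minimal Go sketch of that kind of directory check, assuming a hypothetical hasCNIConfig helper; it is not containerd's actual implementation, only the path and the outcome are taken from the log.

```go
// Sketch only: approximates the readiness check implied by the log above
// ("no network config found in /etc/cni/net.d: cni plugin not initialized").
// Not containerd's real code; cniConfDir comes from the log, hasCNIConfig is hypothetical.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const cniConfDir = "/etc/cni/net.d"

// hasCNIConfig reports whether any CNI network config file is present in dir.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig(cniConfDir)
	if err != nil || !ok {
		// Corresponds to NetworkReady=false / NetworkPluginNotReady in the kubelet log.
		fmt.Println("cni plugin not initialized: no network config found in", cniConfDir)
		return
	}
	fmt.Println("cni config present; network plugin can initialize")
}
```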
Dec 16 13:00:29.797371 containerd[1618]: time="2025-12-16T13:00:29.797047684Z" level=info msg="received container exit event container_id:\"b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a\" id:\"b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a\" pid:3611 exited_at:{seconds:1765890029 nanos:796679532}" Dec 16 13:00:29.797000 audit: BPF prog-id=183 op=UNLOAD Dec 16 13:00:29.821575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b850d5a6f4a408c3007809dd28c8e5b5663aa56c945698f46f8ca5b344e0cf3a-rootfs.mount: Deactivated successfully. Dec 16 13:00:29.878567 kubelet[2813]: I1216 13:00:29.878529 2813 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:00:29.915806 systemd[1]: Created slice kubepods-burstable-podc15e2c6a_d918_4414_bed7_25c87abcbc42.slice - libcontainer container kubepods-burstable-podc15e2c6a_d918_4414_bed7_25c87abcbc42.slice. Dec 16 13:00:29.928985 systemd[1]: Created slice kubepods-besteffort-pod7a67396c_f7d6_4b07_9621_f2a99ef21577.slice - libcontainer container kubepods-besteffort-pod7a67396c_f7d6_4b07_9621_f2a99ef21577.slice. Dec 16 13:00:29.937423 systemd[1]: Created slice kubepods-besteffort-pode3fb7e1f_3535_4337_a579_ff59dbec66d0.slice - libcontainer container kubepods-besteffort-pode3fb7e1f_3535_4337_a579_ff59dbec66d0.slice. Dec 16 13:00:29.946378 systemd[1]: Created slice kubepods-besteffort-pod3c1f37f8_b232_4ab7_9b50_17ad83754886.slice - libcontainer container kubepods-besteffort-pod3c1f37f8_b232_4ab7_9b50_17ad83754886.slice. Dec 16 13:00:29.956027 systemd[1]: Created slice kubepods-burstable-poded7f4f33_5b34_46d1_90ea_2333fc7431f2.slice - libcontainer container kubepods-burstable-poded7f4f33_5b34_46d1_90ea_2333fc7431f2.slice. Dec 16 13:00:29.963539 kubelet[2813]: I1216 13:00:29.963516 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a67396c-f7d6-4b07-9621-f2a99ef21577-tigera-ca-bundle\") pod \"calico-kube-controllers-579dd74c6f-9xbhn\" (UID: \"7a67396c-f7d6-4b07-9621-f2a99ef21577\") " pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" Dec 16 13:00:29.963750 kubelet[2813]: I1216 13:00:29.963720 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1ac913f-bfd6-4d60-abaa-3d193db00d41-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-n2rhc\" (UID: \"d1ac913f-bfd6-4d60-abaa-3d193db00d41\") " pod="calico-system/goldmane-7c778bb748-n2rhc" Dec 16 13:00:29.964107 kubelet[2813]: I1216 13:00:29.964047 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqw97\" (UniqueName: \"kubernetes.io/projected/d1ac913f-bfd6-4d60-abaa-3d193db00d41-kube-api-access-dqw97\") pod \"goldmane-7c778bb748-n2rhc\" (UID: \"d1ac913f-bfd6-4d60-abaa-3d193db00d41\") " pod="calico-system/goldmane-7c778bb748-n2rhc" Dec 16 13:00:29.964107 kubelet[2813]: I1216 13:00:29.964072 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/abd9c463-d326-4641-8a59-5c99d6568321-whisker-backend-key-pair\") pod \"whisker-96b6f64c4-rr4hp\" (UID: \"abd9c463-d326-4641-8a59-5c99d6568321\") " pod="calico-system/whisker-96b6f64c4-rr4hp" Dec 16 13:00:29.964652 systemd[1]: Created slice 
kubepods-besteffort-podd1ac913f_bfd6_4d60_abaa_3d193db00d41.slice - libcontainer container kubepods-besteffort-podd1ac913f_bfd6_4d60_abaa_3d193db00d41.slice. Dec 16 13:00:29.965649 kubelet[2813]: I1216 13:00:29.964092 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6h2d\" (UniqueName: \"kubernetes.io/projected/c15e2c6a-d918-4414-bed7-25c87abcbc42-kube-api-access-n6h2d\") pod \"coredns-66bc5c9577-4w8pf\" (UID: \"c15e2c6a-d918-4414-bed7-25c87abcbc42\") " pod="kube-system/coredns-66bc5c9577-4w8pf" Dec 16 13:00:29.965649 kubelet[2813]: I1216 13:00:29.964702 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d1ac913f-bfd6-4d60-abaa-3d193db00d41-goldmane-key-pair\") pod \"goldmane-7c778bb748-n2rhc\" (UID: \"d1ac913f-bfd6-4d60-abaa-3d193db00d41\") " pod="calico-system/goldmane-7c778bb748-n2rhc" Dec 16 13:00:29.965649 kubelet[2813]: I1216 13:00:29.964718 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vvql\" (UniqueName: \"kubernetes.io/projected/abd9c463-d326-4641-8a59-5c99d6568321-kube-api-access-4vvql\") pod \"whisker-96b6f64c4-rr4hp\" (UID: \"abd9c463-d326-4641-8a59-5c99d6568321\") " pod="calico-system/whisker-96b6f64c4-rr4hp" Dec 16 13:00:29.965951 kubelet[2813]: I1216 13:00:29.965783 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbq4q\" (UniqueName: \"kubernetes.io/projected/3c1f37f8-b232-4ab7-9b50-17ad83754886-kube-api-access-bbq4q\") pod \"calico-apiserver-559c85bdcd-glvwq\" (UID: \"3c1f37f8-b232-4ab7-9b50-17ad83754886\") " pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" Dec 16 13:00:29.965951 kubelet[2813]: I1216 13:00:29.965898 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml4nn\" (UniqueName: \"kubernetes.io/projected/e3fb7e1f-3535-4337-a579-ff59dbec66d0-kube-api-access-ml4nn\") pod \"calico-apiserver-559c85bdcd-wnq6t\" (UID: \"e3fb7e1f-3535-4337-a579-ff59dbec66d0\") " pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" Dec 16 13:00:29.965951 kubelet[2813]: I1216 13:00:29.965919 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1ac913f-bfd6-4d60-abaa-3d193db00d41-config\") pod \"goldmane-7c778bb748-n2rhc\" (UID: \"d1ac913f-bfd6-4d60-abaa-3d193db00d41\") " pod="calico-system/goldmane-7c778bb748-n2rhc" Dec 16 13:00:29.966191 kubelet[2813]: I1216 13:00:29.966124 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8j8d\" (UniqueName: \"kubernetes.io/projected/ed7f4f33-5b34-46d1-90ea-2333fc7431f2-kube-api-access-g8j8d\") pod \"coredns-66bc5c9577-cwnsn\" (UID: \"ed7f4f33-5b34-46d1-90ea-2333fc7431f2\") " pod="kube-system/coredns-66bc5c9577-cwnsn" Dec 16 13:00:29.966275 kubelet[2813]: I1216 13:00:29.966262 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c1f37f8-b232-4ab7-9b50-17ad83754886-calico-apiserver-certs\") pod \"calico-apiserver-559c85bdcd-glvwq\" (UID: \"3c1f37f8-b232-4ab7-9b50-17ad83754886\") " pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" Dec 16 13:00:29.966403 kubelet[2813]: I1216 
13:00:29.966340 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c15e2c6a-d918-4414-bed7-25c87abcbc42-config-volume\") pod \"coredns-66bc5c9577-4w8pf\" (UID: \"c15e2c6a-d918-4414-bed7-25c87abcbc42\") " pod="kube-system/coredns-66bc5c9577-4w8pf" Dec 16 13:00:29.966403 kubelet[2813]: I1216 13:00:29.966362 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzbn\" (UniqueName: \"kubernetes.io/projected/7a67396c-f7d6-4b07-9621-f2a99ef21577-kube-api-access-brzbn\") pod \"calico-kube-controllers-579dd74c6f-9xbhn\" (UID: \"7a67396c-f7d6-4b07-9621-f2a99ef21577\") " pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" Dec 16 13:00:29.966567 kubelet[2813]: I1216 13:00:29.966378 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd9c463-d326-4641-8a59-5c99d6568321-whisker-ca-bundle\") pod \"whisker-96b6f64c4-rr4hp\" (UID: \"abd9c463-d326-4641-8a59-5c99d6568321\") " pod="calico-system/whisker-96b6f64c4-rr4hp" Dec 16 13:00:29.968645 kubelet[2813]: I1216 13:00:29.968608 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3fb7e1f-3535-4337-a579-ff59dbec66d0-calico-apiserver-certs\") pod \"calico-apiserver-559c85bdcd-wnq6t\" (UID: \"e3fb7e1f-3535-4337-a579-ff59dbec66d0\") " pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" Dec 16 13:00:29.968715 kubelet[2813]: I1216 13:00:29.968701 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7f4f33-5b34-46d1-90ea-2333fc7431f2-config-volume\") pod \"coredns-66bc5c9577-cwnsn\" (UID: \"ed7f4f33-5b34-46d1-90ea-2333fc7431f2\") " pod="kube-system/coredns-66bc5c9577-cwnsn" Dec 16 13:00:29.977825 systemd[1]: Created slice kubepods-besteffort-podabd9c463_d326_4641_8a59_5c99d6568321.slice - libcontainer container kubepods-besteffort-podabd9c463_d326_4641_8a59_5c99d6568321.slice. 
Dec 16 13:00:30.225561 kubelet[2813]: E1216 13:00:30.225512 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:30.228110 containerd[1618]: time="2025-12-16T13:00:30.226248261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4w8pf,Uid:c15e2c6a-d918-4414-bed7-25c87abcbc42,Namespace:kube-system,Attempt:0,}" Dec 16 13:00:30.239726 containerd[1618]: time="2025-12-16T13:00:30.239684350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-579dd74c6f-9xbhn,Uid:7a67396c-f7d6-4b07-9621-f2a99ef21577,Namespace:calico-system,Attempt:0,}" Dec 16 13:00:30.253909 containerd[1618]: time="2025-12-16T13:00:30.253866965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559c85bdcd-glvwq,Uid:3c1f37f8-b232-4ab7-9b50-17ad83754886,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:00:30.254821 containerd[1618]: time="2025-12-16T13:00:30.254775571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559c85bdcd-wnq6t,Uid:e3fb7e1f-3535-4337-a579-ff59dbec66d0,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:00:30.276852 kubelet[2813]: E1216 13:00:30.275646 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:30.276852 kubelet[2813]: E1216 13:00:30.276220 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:30.278996 containerd[1618]: time="2025-12-16T13:00:30.278974791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:00:30.280223 containerd[1618]: time="2025-12-16T13:00:30.280201369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-n2rhc,Uid:d1ac913f-bfd6-4d60-abaa-3d193db00d41,Namespace:calico-system,Attempt:0,}" Dec 16 13:00:30.280383 containerd[1618]: time="2025-12-16T13:00:30.280366030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cwnsn,Uid:ed7f4f33-5b34-46d1-90ea-2333fc7431f2,Namespace:kube-system,Attempt:0,}" Dec 16 13:00:30.284932 containerd[1618]: time="2025-12-16T13:00:30.284910480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-96b6f64c4-rr4hp,Uid:abd9c463-d326-4641-8a59-5c99d6568321,Namespace:calico-system,Attempt:0,}" Dec 16 13:00:30.356104 containerd[1618]: time="2025-12-16T13:00:30.356055172Z" level=error msg="Failed to destroy network for sandbox \"87a52351107b38cedfe6b3ddbda545595603cb2c44c7ec3530e51764abb611ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.360533 containerd[1618]: time="2025-12-16T13:00:30.360486462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4w8pf,Uid:c15e2c6a-d918-4414-bed7-25c87abcbc42,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a52351107b38cedfe6b3ddbda545595603cb2c44c7ec3530e51764abb611ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Dec 16 13:00:30.360769 kubelet[2813]: E1216 13:00:30.360727 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a52351107b38cedfe6b3ddbda545595603cb2c44c7ec3530e51764abb611ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.360839 kubelet[2813]: E1216 13:00:30.360784 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a52351107b38cedfe6b3ddbda545595603cb2c44c7ec3530e51764abb611ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4w8pf" Dec 16 13:00:30.360883 kubelet[2813]: E1216 13:00:30.360839 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87a52351107b38cedfe6b3ddbda545595603cb2c44c7ec3530e51764abb611ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4w8pf" Dec 16 13:00:30.360945 kubelet[2813]: E1216 13:00:30.360914 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4w8pf_kube-system(c15e2c6a-d918-4414-bed7-25c87abcbc42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4w8pf_kube-system(c15e2c6a-d918-4414-bed7-25c87abcbc42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87a52351107b38cedfe6b3ddbda545595603cb2c44c7ec3530e51764abb611ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4w8pf" podUID="c15e2c6a-d918-4414-bed7-25c87abcbc42" Dec 16 13:00:30.419293 containerd[1618]: time="2025-12-16T13:00:30.419234822Z" level=error msg="Failed to destroy network for sandbox \"ccbb678e182cb577d955ab5a841dcdac03e5da980686bc6a91dda4e59ad7f02e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.421620 containerd[1618]: time="2025-12-16T13:00:30.421578787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-579dd74c6f-9xbhn,Uid:7a67396c-f7d6-4b07-9621-f2a99ef21577,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbb678e182cb577d955ab5a841dcdac03e5da980686bc6a91dda4e59ad7f02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.421882 kubelet[2813]: E1216 13:00:30.421841 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbb678e182cb577d955ab5a841dcdac03e5da980686bc6a91dda4e59ad7f02e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.421959 kubelet[2813]: E1216 13:00:30.421896 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbb678e182cb577d955ab5a841dcdac03e5da980686bc6a91dda4e59ad7f02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" Dec 16 13:00:30.421959 kubelet[2813]: E1216 13:00:30.421916 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccbb678e182cb577d955ab5a841dcdac03e5da980686bc6a91dda4e59ad7f02e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" Dec 16 13:00:30.422028 kubelet[2813]: E1216 13:00:30.421961 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-579dd74c6f-9xbhn_calico-system(7a67396c-f7d6-4b07-9621-f2a99ef21577)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-579dd74c6f-9xbhn_calico-system(7a67396c-f7d6-4b07-9621-f2a99ef21577)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccbb678e182cb577d955ab5a841dcdac03e5da980686bc6a91dda4e59ad7f02e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:00:30.448027 containerd[1618]: time="2025-12-16T13:00:30.447955062Z" level=error msg="Failed to destroy network for sandbox \"96832e4f54c47f7409bc94809f7b49e87cbad3d922687e034f7e0cd599b5ed8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.466990 containerd[1618]: time="2025-12-16T13:00:30.466673216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559c85bdcd-wnq6t,Uid:e3fb7e1f-3535-4337-a579-ff59dbec66d0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96832e4f54c47f7409bc94809f7b49e87cbad3d922687e034f7e0cd599b5ed8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.468918 kubelet[2813]: E1216 13:00:30.468587 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96832e4f54c47f7409bc94809f7b49e87cbad3d922687e034f7e0cd599b5ed8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.468918 kubelet[2813]: E1216 13:00:30.468655 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96832e4f54c47f7409bc94809f7b49e87cbad3d922687e034f7e0cd599b5ed8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" Dec 16 13:00:30.468918 kubelet[2813]: E1216 13:00:30.468673 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96832e4f54c47f7409bc94809f7b49e87cbad3d922687e034f7e0cd599b5ed8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" Dec 16 13:00:30.469109 kubelet[2813]: E1216 13:00:30.468716 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-559c85bdcd-wnq6t_calico-apiserver(e3fb7e1f-3535-4337-a579-ff59dbec66d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-559c85bdcd-wnq6t_calico-apiserver(e3fb7e1f-3535-4337-a579-ff59dbec66d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96832e4f54c47f7409bc94809f7b49e87cbad3d922687e034f7e0cd599b5ed8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:00:30.476750 containerd[1618]: time="2025-12-16T13:00:30.476199289Z" level=error msg="Failed to destroy network for sandbox \"ba16c9ca960d5e8dc2d76c515757068a911d70ffc03224f822f55b2f10ad31a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.479054 containerd[1618]: time="2025-12-16T13:00:30.478944078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559c85bdcd-glvwq,Uid:3c1f37f8-b232-4ab7-9b50-17ad83754886,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba16c9ca960d5e8dc2d76c515757068a911d70ffc03224f822f55b2f10ad31a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.479310 kubelet[2813]: E1216 13:00:30.479276 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba16c9ca960d5e8dc2d76c515757068a911d70ffc03224f822f55b2f10ad31a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.479431 kubelet[2813]: E1216 13:00:30.479318 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba16c9ca960d5e8dc2d76c515757068a911d70ffc03224f822f55b2f10ad31a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" Dec 16 13:00:30.479431 kubelet[2813]: E1216 13:00:30.479337 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba16c9ca960d5e8dc2d76c515757068a911d70ffc03224f822f55b2f10ad31a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" Dec 16 13:00:30.479431 kubelet[2813]: E1216 13:00:30.479374 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-559c85bdcd-glvwq_calico-apiserver(3c1f37f8-b232-4ab7-9b50-17ad83754886)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-559c85bdcd-glvwq_calico-apiserver(3c1f37f8-b232-4ab7-9b50-17ad83754886)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba16c9ca960d5e8dc2d76c515757068a911d70ffc03224f822f55b2f10ad31a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:00:30.494107 containerd[1618]: time="2025-12-16T13:00:30.494059338Z" level=error msg="Failed to destroy network for sandbox \"d7928fdd9d82c682087441ca9cd068623d350c0d386b0f18c1e3398f6d30e290\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.494596 containerd[1618]: time="2025-12-16T13:00:30.494534281Z" level=error msg="Failed to destroy network for sandbox \"672626ca755ae4a8d7bc52f33aeb2b973736d87aedfd1c6e319e2e3c80ac68ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.497482 containerd[1618]: time="2025-12-16T13:00:30.497446400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cwnsn,Uid:ed7f4f33-5b34-46d1-90ea-2333fc7431f2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7928fdd9d82c682087441ca9cd068623d350c0d386b0f18c1e3398f6d30e290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.497713 kubelet[2813]: E1216 13:00:30.497643 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7928fdd9d82c682087441ca9cd068623d350c0d386b0f18c1e3398f6d30e290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.497713 kubelet[2813]: E1216 13:00:30.497710 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7928fdd9d82c682087441ca9cd068623d350c0d386b0f18c1e3398f6d30e290\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cwnsn" Dec 16 13:00:30.497796 kubelet[2813]: E1216 13:00:30.497727 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7928fdd9d82c682087441ca9cd068623d350c0d386b0f18c1e3398f6d30e290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cwnsn" Dec 16 13:00:30.497824 kubelet[2813]: E1216 13:00:30.497792 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-cwnsn_kube-system(ed7f4f33-5b34-46d1-90ea-2333fc7431f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-cwnsn_kube-system(ed7f4f33-5b34-46d1-90ea-2333fc7431f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7928fdd9d82c682087441ca9cd068623d350c0d386b0f18c1e3398f6d30e290\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cwnsn" podUID="ed7f4f33-5b34-46d1-90ea-2333fc7431f2" Dec 16 13:00:30.499284 containerd[1618]: time="2025-12-16T13:00:30.499199222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-n2rhc,Uid:d1ac913f-bfd6-4d60-abaa-3d193db00d41,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"672626ca755ae4a8d7bc52f33aeb2b973736d87aedfd1c6e319e2e3c80ac68ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.499590 kubelet[2813]: E1216 13:00:30.499557 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"672626ca755ae4a8d7bc52f33aeb2b973736d87aedfd1c6e319e2e3c80ac68ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.499737 kubelet[2813]: E1216 13:00:30.499695 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"672626ca755ae4a8d7bc52f33aeb2b973736d87aedfd1c6e319e2e3c80ac68ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-n2rhc" Dec 16 13:00:30.499812 kubelet[2813]: E1216 13:00:30.499791 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"672626ca755ae4a8d7bc52f33aeb2b973736d87aedfd1c6e319e2e3c80ac68ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-n2rhc" Dec 16 13:00:30.499985 kubelet[2813]: E1216 13:00:30.499958 2813 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-n2rhc_calico-system(d1ac913f-bfd6-4d60-abaa-3d193db00d41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-n2rhc_calico-system(d1ac913f-bfd6-4d60-abaa-3d193db00d41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"672626ca755ae4a8d7bc52f33aeb2b973736d87aedfd1c6e319e2e3c80ac68ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:00:30.505691 containerd[1618]: time="2025-12-16T13:00:30.505636395Z" level=error msg="Failed to destroy network for sandbox \"9a489ea84afd0a8dc1bcf51800b7f8862bab7733da23ff766c4ff3fdfbdb796c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.506984 containerd[1618]: time="2025-12-16T13:00:30.506940443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-96b6f64c4-rr4hp,Uid:abd9c463-d326-4641-8a59-5c99d6568321,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a489ea84afd0a8dc1bcf51800b7f8862bab7733da23ff766c4ff3fdfbdb796c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.507550 kubelet[2813]: E1216 13:00:30.507489 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a489ea84afd0a8dc1bcf51800b7f8862bab7733da23ff766c4ff3fdfbdb796c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:30.507550 kubelet[2813]: E1216 13:00:30.507522 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a489ea84afd0a8dc1bcf51800b7f8862bab7733da23ff766c4ff3fdfbdb796c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-96b6f64c4-rr4hp" Dec 16 13:00:30.507676 kubelet[2813]: E1216 13:00:30.507649 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a489ea84afd0a8dc1bcf51800b7f8862bab7733da23ff766c4ff3fdfbdb796c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-96b6f64c4-rr4hp" Dec 16 13:00:30.507799 kubelet[2813]: E1216 13:00:30.507704 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-96b6f64c4-rr4hp_calico-system(abd9c463-d326-4641-8a59-5c99d6568321)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-96b6f64c4-rr4hp_calico-system(abd9c463-d326-4641-8a59-5c99d6568321)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"9a489ea84afd0a8dc1bcf51800b7f8862bab7733da23ff766c4ff3fdfbdb796c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-96b6f64c4-rr4hp" podUID="abd9c463-d326-4641-8a59-5c99d6568321" Dec 16 13:00:31.112435 systemd[1]: run-netns-cni\x2d4fb84513\x2df4f8\x2d40bb\x2dd9ce\x2d3e172db52bff.mount: Deactivated successfully. Dec 16 13:00:31.112597 systemd[1]: run-netns-cni\x2dfae3418c\x2d2616\x2d8e7a\x2d8371\x2dcab9244e2ba4.mount: Deactivated successfully. Dec 16 13:00:31.112721 systemd[1]: run-netns-cni\x2d5b84ad7c\x2d03a4\x2de88f\x2d2209\x2db34fbb3a2160.mount: Deactivated successfully. Dec 16 13:00:31.164756 systemd[1]: Created slice kubepods-besteffort-pod1c658f10_e923_42a5_b425_72ee5f2a64c8.slice - libcontainer container kubepods-besteffort-pod1c658f10_e923_42a5_b425_72ee5f2a64c8.slice. Dec 16 13:00:31.171602 containerd[1618]: time="2025-12-16T13:00:31.171526301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8llz,Uid:1c658f10-e923-42a5-b425-72ee5f2a64c8,Namespace:calico-system,Attempt:0,}" Dec 16 13:00:31.249592 containerd[1618]: time="2025-12-16T13:00:31.249499886Z" level=error msg="Failed to destroy network for sandbox \"1b2894bfb1b9d14ed5db1ba668c87c3e8fbb6235fff758910a4595fd9f6df636\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:31.253878 containerd[1618]: time="2025-12-16T13:00:31.253780453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8llz,Uid:1c658f10-e923-42a5-b425-72ee5f2a64c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2894bfb1b9d14ed5db1ba668c87c3e8fbb6235fff758910a4595fd9f6df636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:31.254406 kubelet[2813]: E1216 13:00:31.254336 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2894bfb1b9d14ed5db1ba668c87c3e8fbb6235fff758910a4595fd9f6df636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:00:31.254686 systemd[1]: run-netns-cni\x2dd99a456b\x2d02c5\x2da6d7\x2d5a55\x2da57d41029b3d.mount: Deactivated successfully. 
Dec 16 13:00:31.255889 kubelet[2813]: E1216 13:00:31.254775 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2894bfb1b9d14ed5db1ba668c87c3e8fbb6235fff758910a4595fd9f6df636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8llz" Dec 16 13:00:31.255889 kubelet[2813]: E1216 13:00:31.254802 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2894bfb1b9d14ed5db1ba668c87c3e8fbb6235fff758910a4595fd9f6df636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8llz" Dec 16 13:00:31.255889 kubelet[2813]: E1216 13:00:31.254867 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n8llz_calico-system(1c658f10-e923-42a5-b425-72ee5f2a64c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n8llz_calico-system(1c658f10-e923-42a5-b425-72ee5f2a64c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b2894bfb1b9d14ed5db1ba668c87c3e8fbb6235fff758910a4595fd9f6df636\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:00:34.365339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445564036.mount: Deactivated successfully. 
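Every RunPodSandbox failure above ends in the same stat error: /var/lib/calico/nodename does not exist until the calico/node container (pulled and started in the entries that follow) writes it. The toy Go waiter below only illustrates that ordering, assuming a hypothetical waitForNodename helper; it is not Calico or containerd code, and only the file path and the "is calico/node running" hint are taken from the log.

```go
// Sketch only: the repeated sandbox failures above all reduce to
// "stat /var/lib/calico/nodename: no such file or directory"; the file appears
// once calico/node is up. Toy illustration of the ordering, not real Calico code.
package main

import (
	"fmt"
	"os"
	"time"
)

const nodenameFile = "/var/lib/calico/nodename"

// waitForNodename polls until calico/node has written its nodename file,
// the precondition the CNI plugin checks before it can wire up pods.
func waitForNodename(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if b, err := os.ReadFile(nodenameFile); err == nil {
			return string(b), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("%s not present after %s: is calico/node running with /var/lib/calico mounted?", nodenameFile, timeout)
}

func main() {
	name, err := waitForNodename(30 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("calico nodename:", name)
}
```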
Dec 16 13:00:34.391228 containerd[1618]: time="2025-12-16T13:00:34.391185304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:34.392417 containerd[1618]: time="2025-12-16T13:00:34.392391260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 13:00:34.393402 containerd[1618]: time="2025-12-16T13:00:34.393354275Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:34.395549 containerd[1618]: time="2025-12-16T13:00:34.395526516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:00:34.395996 containerd[1618]: time="2025-12-16T13:00:34.395880068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.11582992s" Dec 16 13:00:34.395996 containerd[1618]: time="2025-12-16T13:00:34.395906508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:00:34.407775 containerd[1618]: time="2025-12-16T13:00:34.407749439Z" level=info msg="CreateContainer within sandbox \"3b92f585c5791e97aa395966fbf2110812a3b700120c9ad6c4d60c2fc820c953\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:00:34.422606 containerd[1618]: time="2025-12-16T13:00:34.422561824Z" level=info msg="Container d1b16e55d0c05b57033939b3b3e4e4a7c5d2c1e53290dbafc1bb41592a340bd9: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:00:34.425976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1185752101.mount: Deactivated successfully. Dec 16 13:00:34.433482 containerd[1618]: time="2025-12-16T13:00:34.433448250Z" level=info msg="CreateContainer within sandbox \"3b92f585c5791e97aa395966fbf2110812a3b700120c9ad6c4d60c2fc820c953\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d1b16e55d0c05b57033939b3b3e4e4a7c5d2c1e53290dbafc1bb41592a340bd9\"" Dec 16 13:00:34.435101 containerd[1618]: time="2025-12-16T13:00:34.435073828Z" level=info msg="StartContainer for \"d1b16e55d0c05b57033939b3b3e4e4a7c5d2c1e53290dbafc1bb41592a340bd9\"" Dec 16 13:00:34.436324 containerd[1618]: time="2025-12-16T13:00:34.436254515Z" level=info msg="connecting to shim d1b16e55d0c05b57033939b3b3e4e4a7c5d2c1e53290dbafc1bb41592a340bd9" address="unix:///run/containerd/s/d4396dc045f18efd3348b72759840fecd005b361a729f2c539fa7d884d8c656b" protocol=ttrpc version=3 Dec 16 13:00:34.490187 systemd[1]: Started cri-containerd-d1b16e55d0c05b57033939b3b3e4e4a7c5d2c1e53290dbafc1bb41592a340bd9.scope - libcontainer container d1b16e55d0c05b57033939b3b3e4e4a7c5d2c1e53290dbafc1bb41592a340bd9. 
Dec 16 13:00:34.544690 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 13:00:34.544779 kernel: audit: type=1334 audit(1765890034.542:585): prog-id=184 op=LOAD Dec 16 13:00:34.542000 audit: BPF prog-id=184 op=LOAD Dec 16 13:00:34.553471 kernel: audit: type=1300 audit(1765890034.542:585): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3339 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:34.542000 audit[3870]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3339 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:34.562558 kernel: audit: type=1327 audit(1765890034.542:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623136653535643063303562353730333339333962336233653465 Dec 16 13:00:34.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623136653535643063303562353730333339333962336233653465 Dec 16 13:00:34.564785 kernel: audit: type=1334 audit(1765890034.543:586): prog-id=185 op=LOAD Dec 16 13:00:34.543000 audit: BPF prog-id=185 op=LOAD Dec 16 13:00:34.567042 kernel: audit: type=1300 audit(1765890034.543:586): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3339 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:34.543000 audit[3870]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3339 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:34.580039 kernel: audit: type=1327 audit(1765890034.543:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623136653535643063303562353730333339333962336233653465 Dec 16 13:00:34.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623136653535643063303562353730333339333962336233653465 Dec 16 13:00:34.582317 kernel: audit: type=1334 audit(1765890034.543:587): prog-id=185 op=UNLOAD Dec 16 13:00:34.543000 audit: BPF prog-id=185 op=UNLOAD Dec 16 13:00:34.587040 kernel: audit: type=1300 audit(1765890034.543:587): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:34.543000 audit[3870]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:34.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623136653535643063303562353730333339333962336233653465 Dec 16 13:00:34.598038 kernel: audit: type=1327 audit(1765890034.543:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623136653535643063303562353730333339333962336233653465 Dec 16 13:00:34.603446 kernel: audit: type=1334 audit(1765890034.543:588): prog-id=184 op=UNLOAD Dec 16 13:00:34.543000 audit: BPF prog-id=184 op=UNLOAD Dec 16 13:00:34.543000 audit[3870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:34.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623136653535643063303562353730333339333962336233653465 Dec 16 13:00:34.543000 audit: BPF prog-id=186 op=LOAD Dec 16 13:00:34.543000 audit[3870]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3339 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:34.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623136653535643063303562353730333339333962336233653465 Dec 16 13:00:34.613403 containerd[1618]: time="2025-12-16T13:00:34.613057420Z" level=info msg="StartContainer for \"d1b16e55d0c05b57033939b3b3e4e4a7c5d2c1e53290dbafc1bb41592a340bd9\" returns successfully" Dec 16 13:00:34.694958 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:00:34.695076 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 13:00:34.908121 kubelet[2813]: I1216 13:00:34.908058 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd9c463-d326-4641-8a59-5c99d6568321-whisker-ca-bundle\") pod \"abd9c463-d326-4641-8a59-5c99d6568321\" (UID: \"abd9c463-d326-4641-8a59-5c99d6568321\") " Dec 16 13:00:34.908121 kubelet[2813]: I1216 13:00:34.908126 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vvql\" (UniqueName: \"kubernetes.io/projected/abd9c463-d326-4641-8a59-5c99d6568321-kube-api-access-4vvql\") pod \"abd9c463-d326-4641-8a59-5c99d6568321\" (UID: \"abd9c463-d326-4641-8a59-5c99d6568321\") " Dec 16 13:00:34.908655 kubelet[2813]: I1216 13:00:34.908149 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/abd9c463-d326-4641-8a59-5c99d6568321-whisker-backend-key-pair\") pod \"abd9c463-d326-4641-8a59-5c99d6568321\" (UID: \"abd9c463-d326-4641-8a59-5c99d6568321\") " Dec 16 13:00:34.908893 kubelet[2813]: I1216 13:00:34.908872 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd9c463-d326-4641-8a59-5c99d6568321-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "abd9c463-d326-4641-8a59-5c99d6568321" (UID: "abd9c463-d326-4641-8a59-5c99d6568321"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:00:34.916097 kubelet[2813]: I1216 13:00:34.915298 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd9c463-d326-4641-8a59-5c99d6568321-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "abd9c463-d326-4641-8a59-5c99d6568321" (UID: "abd9c463-d326-4641-8a59-5c99d6568321"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:00:34.916097 kubelet[2813]: I1216 13:00:34.915406 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd9c463-d326-4641-8a59-5c99d6568321-kube-api-access-4vvql" (OuterVolumeSpecName: "kube-api-access-4vvql") pod "abd9c463-d326-4641-8a59-5c99d6568321" (UID: "abd9c463-d326-4641-8a59-5c99d6568321"). InnerVolumeSpecName "kube-api-access-4vvql". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:00:35.009247 kubelet[2813]: I1216 13:00:35.009216 2813 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4vvql\" (UniqueName: \"kubernetes.io/projected/abd9c463-d326-4641-8a59-5c99d6568321-kube-api-access-4vvql\") on node \"172-239-193-244\" DevicePath \"\"" Dec 16 13:00:35.009247 kubelet[2813]: I1216 13:00:35.009239 2813 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/abd9c463-d326-4641-8a59-5c99d6568321-whisker-backend-key-pair\") on node \"172-239-193-244\" DevicePath \"\"" Dec 16 13:00:35.009247 kubelet[2813]: I1216 13:00:35.009249 2813 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd9c463-d326-4641-8a59-5c99d6568321-whisker-ca-bundle\") on node \"172-239-193-244\" DevicePath \"\"" Dec 16 13:00:35.297143 kubelet[2813]: E1216 13:00:35.295612 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:35.306096 systemd[1]: Removed slice kubepods-besteffort-podabd9c463_d326_4641_8a59_5c99d6568321.slice - libcontainer container kubepods-besteffort-podabd9c463_d326_4641_8a59_5c99d6568321.slice. Dec 16 13:00:35.339710 kubelet[2813]: I1216 13:00:35.338987 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cfknv" podStartSLOduration=1.621701688 podStartE2EDuration="12.338972602s" podCreationTimestamp="2025-12-16 13:00:23 +0000 UTC" firstStartedPulling="2025-12-16 13:00:23.679294397 +0000 UTC m=+21.625718880" lastFinishedPulling="2025-12-16 13:00:34.396565311 +0000 UTC m=+32.342989794" observedRunningTime="2025-12-16 13:00:35.318601724 +0000 UTC m=+33.265026227" watchObservedRunningTime="2025-12-16 13:00:35.338972602 +0000 UTC m=+33.285397085" Dec 16 13:00:35.366220 systemd[1]: var-lib-kubelet-pods-abd9c463\x2dd326\x2d4641\x2d8a59\x2d5c99d6568321-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4vvql.mount: Deactivated successfully. Dec 16 13:00:35.366789 systemd[1]: var-lib-kubelet-pods-abd9c463\x2dd326\x2d4641\x2d8a59\x2d5c99d6568321-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 13:00:35.379362 systemd[1]: Created slice kubepods-besteffort-pod537eba7d_0fc3_4664_af16_ea0352a41fb1.slice - libcontainer container kubepods-besteffort-pod537eba7d_0fc3_4664_af16_ea0352a41fb1.slice. 
Dec 16 13:00:35.413524 kubelet[2813]: I1216 13:00:35.413479 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/537eba7d-0fc3-4664-af16-ea0352a41fb1-whisker-ca-bundle\") pod \"whisker-555f98f96-jlzdx\" (UID: \"537eba7d-0fc3-4664-af16-ea0352a41fb1\") " pod="calico-system/whisker-555f98f96-jlzdx" Dec 16 13:00:35.413524 kubelet[2813]: I1216 13:00:35.413521 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/537eba7d-0fc3-4664-af16-ea0352a41fb1-whisker-backend-key-pair\") pod \"whisker-555f98f96-jlzdx\" (UID: \"537eba7d-0fc3-4664-af16-ea0352a41fb1\") " pod="calico-system/whisker-555f98f96-jlzdx" Dec 16 13:00:35.413524 kubelet[2813]: I1216 13:00:35.413536 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgsld\" (UniqueName: \"kubernetes.io/projected/537eba7d-0fc3-4664-af16-ea0352a41fb1-kube-api-access-sgsld\") pod \"whisker-555f98f96-jlzdx\" (UID: \"537eba7d-0fc3-4664-af16-ea0352a41fb1\") " pod="calico-system/whisker-555f98f96-jlzdx" Dec 16 13:00:35.687988 containerd[1618]: time="2025-12-16T13:00:35.687925558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555f98f96-jlzdx,Uid:537eba7d-0fc3-4664-af16-ea0352a41fb1,Namespace:calico-system,Attempt:0,}" Dec 16 13:00:35.835138 systemd-networkd[1517]: cali455d82c9966: Link UP Dec 16 13:00:35.835967 systemd-networkd[1517]: cali455d82c9966: Gained carrier Dec 16 13:00:35.850233 containerd[1618]: 2025-12-16 13:00:35.723 [INFO][3960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:00:35.850233 containerd[1618]: 2025-12-16 13:00:35.768 [INFO][3960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0 whisker-555f98f96- calico-system 537eba7d-0fc3-4664-af16-ea0352a41fb1 940 0 2025-12-16 13:00:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:555f98f96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-193-244 whisker-555f98f96-jlzdx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali455d82c9966 [] [] }} ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Namespace="calico-system" Pod="whisker-555f98f96-jlzdx" WorkloadEndpoint="172--239--193--244-k8s-whisker--555f98f96--jlzdx-" Dec 16 13:00:35.850233 containerd[1618]: 2025-12-16 13:00:35.769 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Namespace="calico-system" Pod="whisker-555f98f96-jlzdx" WorkloadEndpoint="172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0" Dec 16 13:00:35.850233 containerd[1618]: 2025-12-16 13:00:35.793 [INFO][3971] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" HandleID="k8s-pod-network.927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Workload="172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0" Dec 16 13:00:35.850413 containerd[1618]: 2025-12-16 13:00:35.794 [INFO][3971] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" HandleID="k8s-pod-network.927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Workload="172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-244", "pod":"whisker-555f98f96-jlzdx", "timestamp":"2025-12-16 13:00:35.793986558 +0000 UTC"}, Hostname:"172-239-193-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:00:35.850413 containerd[1618]: 2025-12-16 13:00:35.794 [INFO][3971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:00:35.850413 containerd[1618]: 2025-12-16 13:00:35.795 [INFO][3971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:00:35.850413 containerd[1618]: 2025-12-16 13:00:35.795 [INFO][3971] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-244' Dec 16 13:00:35.850413 containerd[1618]: 2025-12-16 13:00:35.801 [INFO][3971] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" host="172-239-193-244" Dec 16 13:00:35.850413 containerd[1618]: 2025-12-16 13:00:35.805 [INFO][3971] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-244" Dec 16 13:00:35.850413 containerd[1618]: 2025-12-16 13:00:35.809 [INFO][3971] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:35.850413 containerd[1618]: 2025-12-16 13:00:35.810 [INFO][3971] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:35.850413 containerd[1618]: 2025-12-16 13:00:35.812 [INFO][3971] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:35.850413 containerd[1618]: 2025-12-16 13:00:35.812 [INFO][3971] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" host="172-239-193-244" Dec 16 13:00:35.850647 containerd[1618]: 2025-12-16 13:00:35.813 [INFO][3971] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3 Dec 16 13:00:35.850647 containerd[1618]: 2025-12-16 13:00:35.816 [INFO][3971] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" host="172-239-193-244" Dec 16 13:00:35.850647 containerd[1618]: 2025-12-16 13:00:35.821 [INFO][3971] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.193/26] block=192.168.96.192/26 handle="k8s-pod-network.927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" host="172-239-193-244" Dec 16 13:00:35.850647 containerd[1618]: 2025-12-16 13:00:35.821 [INFO][3971] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.193/26] handle="k8s-pod-network.927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" host="172-239-193-244" Dec 16 13:00:35.850647 containerd[1618]: 2025-12-16 13:00:35.821 [INFO][3971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:00:35.850647 containerd[1618]: 2025-12-16 13:00:35.821 [INFO][3971] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.193/26] IPv6=[] ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" HandleID="k8s-pod-network.927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Workload="172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0" Dec 16 13:00:35.850758 containerd[1618]: 2025-12-16 13:00:35.825 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Namespace="calico-system" Pod="whisker-555f98f96-jlzdx" WorkloadEndpoint="172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0", GenerateName:"whisker-555f98f96-", Namespace:"calico-system", SelfLink:"", UID:"537eba7d-0fc3-4664-af16-ea0352a41fb1", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"555f98f96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"", Pod:"whisker-555f98f96-jlzdx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali455d82c9966", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:35.850758 containerd[1618]: 2025-12-16 13:00:35.825 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.193/32] ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Namespace="calico-system" Pod="whisker-555f98f96-jlzdx" WorkloadEndpoint="172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0" Dec 16 13:00:35.850833 containerd[1618]: 2025-12-16 13:00:35.825 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali455d82c9966 ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Namespace="calico-system" Pod="whisker-555f98f96-jlzdx" WorkloadEndpoint="172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0" Dec 16 13:00:35.850833 containerd[1618]: 2025-12-16 13:00:35.835 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Namespace="calico-system" Pod="whisker-555f98f96-jlzdx" WorkloadEndpoint="172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0" Dec 16 13:00:35.850875 containerd[1618]: 2025-12-16 13:00:35.835 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Namespace="calico-system" Pod="whisker-555f98f96-jlzdx" 
WorkloadEndpoint="172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0", GenerateName:"whisker-555f98f96-", Namespace:"calico-system", SelfLink:"", UID:"537eba7d-0fc3-4664-af16-ea0352a41fb1", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"555f98f96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3", Pod:"whisker-555f98f96-jlzdx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali455d82c9966", MAC:"2e:6b:c7:69:a2:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:35.850922 containerd[1618]: 2025-12-16 13:00:35.844 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" Namespace="calico-system" Pod="whisker-555f98f96-jlzdx" WorkloadEndpoint="172--239--193--244-k8s-whisker--555f98f96--jlzdx-eth0" Dec 16 13:00:35.890956 containerd[1618]: time="2025-12-16T13:00:35.890910233Z" level=info msg="connecting to shim 927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3" address="unix:///run/containerd/s/75ea6e59cab0385ede7d91bee9496c815e441a88323176d0fa4d441367715fbf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:35.924181 systemd[1]: Started cri-containerd-927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3.scope - libcontainer container 927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3. 
Dec 16 13:00:35.935000 audit: BPF prog-id=187 op=LOAD Dec 16 13:00:35.935000 audit: BPF prog-id=188 op=LOAD Dec 16 13:00:35.935000 audit[4004]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3992 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:35.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376432323739326531353739623765353839303963333733336239 Dec 16 13:00:35.935000 audit: BPF prog-id=188 op=UNLOAD Dec 16 13:00:35.935000 audit[4004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3992 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:35.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376432323739326531353739623765353839303963333733336239 Dec 16 13:00:35.935000 audit: BPF prog-id=189 op=LOAD Dec 16 13:00:35.935000 audit[4004]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3992 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:35.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376432323739326531353739623765353839303963333733336239 Dec 16 13:00:35.935000 audit: BPF prog-id=190 op=LOAD Dec 16 13:00:35.935000 audit[4004]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3992 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:35.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376432323739326531353739623765353839303963333733336239 Dec 16 13:00:35.935000 audit: BPF prog-id=190 op=UNLOAD Dec 16 13:00:35.935000 audit[4004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3992 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:35.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376432323739326531353739623765353839303963333733336239 Dec 16 13:00:35.935000 audit: BPF prog-id=189 op=UNLOAD Dec 16 13:00:35.935000 audit[4004]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3992 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:35.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376432323739326531353739623765353839303963333733336239 Dec 16 13:00:35.935000 audit: BPF prog-id=191 op=LOAD Dec 16 13:00:35.935000 audit[4004]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3992 pid=4004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:35.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932376432323739326531353739623765353839303963333733336239 Dec 16 13:00:35.981534 containerd[1618]: time="2025-12-16T13:00:35.981452018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555f98f96-jlzdx,Uid:537eba7d-0fc3-4664-af16-ea0352a41fb1,Namespace:calico-system,Attempt:0,} returns sandbox id \"927d22792e1579b7e58909c3733b9d4624c6b61aaf3111312ef99290c78972a3\"" Dec 16 13:00:35.984495 containerd[1618]: time="2025-12-16T13:00:35.984444712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:00:36.114247 containerd[1618]: time="2025-12-16T13:00:36.114193822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:36.115778 containerd[1618]: time="2025-12-16T13:00:36.115172066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:00:36.116126 containerd[1618]: time="2025-12-16T13:00:36.115766309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:36.116763 kubelet[2813]: E1216 13:00:36.116448 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:00:36.116763 kubelet[2813]: E1216 13:00:36.116498 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:00:36.116763 kubelet[2813]: E1216 13:00:36.116572 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-555f98f96-jlzdx_calico-system(537eba7d-0fc3-4664-af16-ea0352a41fb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found" logger="UnhandledError" Dec 16 13:00:36.118920 containerd[1618]: time="2025-12-16T13:00:36.118872303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:00:36.163493 kubelet[2813]: I1216 13:00:36.163420 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd9c463-d326-4641-8a59-5c99d6568321" path="/var/lib/kubelet/pods/abd9c463-d326-4641-8a59-5c99d6568321/volumes" Dec 16 13:00:36.259621 containerd[1618]: time="2025-12-16T13:00:36.259457906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:36.264149 containerd[1618]: time="2025-12-16T13:00:36.264025676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:00:36.264149 containerd[1618]: time="2025-12-16T13:00:36.264104167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:36.264615 kubelet[2813]: E1216 13:00:36.264523 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:00:36.264810 kubelet[2813]: E1216 13:00:36.264758 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:00:36.265530 kubelet[2813]: E1216 13:00:36.265054 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-555f98f96-jlzdx_calico-system(537eba7d-0fc3-4664-af16-ea0352a41fb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:36.265695 kubelet[2813]: E1216 13:00:36.265592 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:00:36.302217 kubelet[2813]: E1216 13:00:36.301600 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:36.304281 kubelet[2813]: E1216 13:00:36.304234 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:00:36.342000 audit[4142]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=4142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:36.342000 audit[4142]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffca67f5280 a2=0 a3=7ffca67f526c items=0 ppid=2965 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:36.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:36.348000 audit[4142]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=4142 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:36.348000 audit[4142]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca67f5280 a2=0 a3=0 items=0 ppid=2965 pid=4142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:36.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:36.989238 systemd-networkd[1517]: cali455d82c9966: Gained IPv6LL Dec 16 13:00:37.305976 kubelet[2813]: E1216 13:00:37.305764 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:37.311073 kubelet[2813]: E1216 13:00:37.310784 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:00:42.159423 containerd[1618]: time="2025-12-16T13:00:42.159381882Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-n2rhc,Uid:d1ac913f-bfd6-4d60-abaa-3d193db00d41,Namespace:calico-system,Attempt:0,}" Dec 16 13:00:42.160931 containerd[1618]: time="2025-12-16T13:00:42.160906886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559c85bdcd-glvwq,Uid:3c1f37f8-b232-4ab7-9b50-17ad83754886,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:00:42.285360 systemd-networkd[1517]: cali9b6a1c86f6d: Link UP Dec 16 13:00:42.287605 systemd-networkd[1517]: cali9b6a1c86f6d: Gained carrier Dec 16 13:00:42.304383 containerd[1618]: 2025-12-16 13:00:42.200 [INFO][4281] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:00:42.304383 containerd[1618]: 2025-12-16 13:00:42.212 [INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0 goldmane-7c778bb748- calico-system d1ac913f-bfd6-4d60-abaa-3d193db00d41 874 0 2025-12-16 13:00:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-193-244 goldmane-7c778bb748-n2rhc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9b6a1c86f6d [] [] }} ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Namespace="calico-system" Pod="goldmane-7c778bb748-n2rhc" WorkloadEndpoint="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-" Dec 16 13:00:42.304383 containerd[1618]: 2025-12-16 13:00:42.212 [INFO][4281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Namespace="calico-system" Pod="goldmane-7c778bb748-n2rhc" WorkloadEndpoint="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0" Dec 16 13:00:42.304383 containerd[1618]: 2025-12-16 13:00:42.243 [INFO][4304] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" HandleID="k8s-pod-network.90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Workload="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0" Dec 16 13:00:42.304574 containerd[1618]: 2025-12-16 13:00:42.243 [INFO][4304] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" HandleID="k8s-pod-network.90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Workload="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d51b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-244", "pod":"goldmane-7c778bb748-n2rhc", "timestamp":"2025-12-16 13:00:42.243457659 +0000 UTC"}, Hostname:"172-239-193-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:00:42.304574 containerd[1618]: 2025-12-16 13:00:42.243 [INFO][4304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:00:42.304574 containerd[1618]: 2025-12-16 13:00:42.243 [INFO][4304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:00:42.304574 containerd[1618]: 2025-12-16 13:00:42.244 [INFO][4304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-244' Dec 16 13:00:42.304574 containerd[1618]: 2025-12-16 13:00:42.251 [INFO][4304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" host="172-239-193-244" Dec 16 13:00:42.304574 containerd[1618]: 2025-12-16 13:00:42.255 [INFO][4304] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-244" Dec 16 13:00:42.304574 containerd[1618]: 2025-12-16 13:00:42.259 [INFO][4304] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:42.304574 containerd[1618]: 2025-12-16 13:00:42.260 [INFO][4304] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:42.304574 containerd[1618]: 2025-12-16 13:00:42.263 [INFO][4304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:42.304773 containerd[1618]: 2025-12-16 13:00:42.263 [INFO][4304] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" host="172-239-193-244" Dec 16 13:00:42.304773 containerd[1618]: 2025-12-16 13:00:42.264 [INFO][4304] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c Dec 16 13:00:42.304773 containerd[1618]: 2025-12-16 13:00:42.267 [INFO][4304] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" host="172-239-193-244" Dec 16 13:00:42.304773 containerd[1618]: 2025-12-16 13:00:42.272 [INFO][4304] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.194/26] block=192.168.96.192/26 handle="k8s-pod-network.90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" host="172-239-193-244" Dec 16 13:00:42.304773 containerd[1618]: 2025-12-16 13:00:42.272 [INFO][4304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.194/26] handle="k8s-pod-network.90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" host="172-239-193-244" Dec 16 13:00:42.304773 containerd[1618]: 2025-12-16 13:00:42.272 [INFO][4304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:00:42.304773 containerd[1618]: 2025-12-16 13:00:42.272 [INFO][4304] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.194/26] IPv6=[] ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" HandleID="k8s-pod-network.90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Workload="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0" Dec 16 13:00:42.304907 containerd[1618]: 2025-12-16 13:00:42.280 [INFO][4281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Namespace="calico-system" Pod="goldmane-7c778bb748-n2rhc" WorkloadEndpoint="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d1ac913f-bfd6-4d60-abaa-3d193db00d41", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"", Pod:"goldmane-7c778bb748-n2rhc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9b6a1c86f6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:42.304907 containerd[1618]: 2025-12-16 13:00:42.280 [INFO][4281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.194/32] ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Namespace="calico-system" Pod="goldmane-7c778bb748-n2rhc" WorkloadEndpoint="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0" Dec 16 13:00:42.304981 containerd[1618]: 2025-12-16 13:00:42.280 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b6a1c86f6d ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Namespace="calico-system" Pod="goldmane-7c778bb748-n2rhc" WorkloadEndpoint="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0" Dec 16 13:00:42.304981 containerd[1618]: 2025-12-16 13:00:42.288 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Namespace="calico-system" Pod="goldmane-7c778bb748-n2rhc" WorkloadEndpoint="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0" Dec 16 13:00:42.305141 containerd[1618]: 2025-12-16 13:00:42.288 [INFO][4281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Namespace="calico-system" Pod="goldmane-7c778bb748-n2rhc" 
WorkloadEndpoint="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d1ac913f-bfd6-4d60-abaa-3d193db00d41", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c", Pod:"goldmane-7c778bb748-n2rhc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9b6a1c86f6d", MAC:"96:e1:c3:d2:95:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:42.305206 containerd[1618]: 2025-12-16 13:00:42.298 [INFO][4281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" Namespace="calico-system" Pod="goldmane-7c778bb748-n2rhc" WorkloadEndpoint="172--239--193--244-k8s-goldmane--7c778bb748--n2rhc-eth0" Dec 16 13:00:42.327122 containerd[1618]: time="2025-12-16T13:00:42.327059865Z" level=info msg="connecting to shim 90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c" address="unix:///run/containerd/s/3805c6c53ab71bb6ecb9cdf62e28aa8b2624b3cdd0e270ddea58833600a31a29" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:42.361179 systemd[1]: Started cri-containerd-90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c.scope - libcontainer container 90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c. 
Dec 16 13:00:42.386965 kernel: kauditd_printk_skb: 33 callbacks suppressed Dec 16 13:00:42.387116 kernel: audit: type=1334 audit(1765890042.379:600): prog-id=192 op=LOAD Dec 16 13:00:42.379000 audit: BPF prog-id=192 op=LOAD Dec 16 13:00:42.380000 audit: BPF prog-id=193 op=LOAD Dec 16 13:00:42.392055 kernel: audit: type=1334 audit(1765890042.380:601): prog-id=193 op=LOAD Dec 16 13:00:42.380000 audit[4344]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4333 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.402439 kernel: audit: type=1300 audit(1765890042.380:601): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4333 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643166623365363162663530343663656564303663386331616230 Dec 16 13:00:42.413085 kernel: audit: type=1327 audit(1765890042.380:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643166623365363162663530343663656564303663386331616230 Dec 16 13:00:42.380000 audit: BPF prog-id=193 op=UNLOAD Dec 16 13:00:42.425042 kernel: audit: type=1334 audit(1765890042.380:602): prog-id=193 op=UNLOAD Dec 16 13:00:42.380000 audit[4344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4333 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.435076 kernel: audit: type=1300 audit(1765890042.380:602): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4333 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.448305 kernel: audit: type=1327 audit(1765890042.380:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643166623365363162663530343663656564303663386331616230 Dec 16 13:00:42.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643166623365363162663530343663656564303663386331616230 Dec 16 13:00:42.449598 systemd-networkd[1517]: calicfbbb0a4e85: Link UP Dec 16 13:00:42.380000 audit: BPF prog-id=194 op=LOAD Dec 16 13:00:42.452962 systemd-networkd[1517]: calicfbbb0a4e85: Gained carrier Dec 16 13:00:42.453044 kernel: audit: type=1334 audit(1765890042.380:603): prog-id=194 op=LOAD Dec 16 13:00:42.463172 kernel: audit: type=1300 audit(1765890042.380:603): arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4333 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.380000 audit[4344]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4333 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.477027 kernel: audit: type=1327 audit(1765890042.380:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643166623365363162663530343663656564303663386331616230 Dec 16 13:00:42.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643166623365363162663530343663656564303663386331616230 Dec 16 13:00:42.380000 audit: BPF prog-id=195 op=LOAD Dec 16 13:00:42.380000 audit[4344]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4333 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643166623365363162663530343663656564303663386331616230 Dec 16 13:00:42.380000 audit: BPF prog-id=195 op=UNLOAD Dec 16 13:00:42.380000 audit[4344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4333 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643166623365363162663530343663656564303663386331616230 Dec 16 13:00:42.380000 audit: BPF prog-id=194 op=UNLOAD Dec 16 13:00:42.380000 audit[4344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4333 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643166623365363162663530343663656564303663386331616230 Dec 16 13:00:42.380000 audit: BPF prog-id=196 op=LOAD Dec 16 13:00:42.380000 audit[4344]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4333 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643166623365363162663530343663656564303663386331616230 Dec 16 13:00:42.479993 containerd[1618]: time="2025-12-16T13:00:42.479948712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-n2rhc,Uid:d1ac913f-bfd6-4d60-abaa-3d193db00d41,Namespace:calico-system,Attempt:0,} returns sandbox id \"90d1fb3e61bf5046ceed06c8c1ab059540bfa7c6b20afd53b4b6316c9670ad7c\"" Dec 16 13:00:42.486199 containerd[1618]: time="2025-12-16T13:00:42.486173781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:00:42.495240 containerd[1618]: 2025-12-16 13:00:42.197 [INFO][4282] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:00:42.495240 containerd[1618]: 2025-12-16 13:00:42.211 [INFO][4282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0 calico-apiserver-559c85bdcd- calico-apiserver 3c1f37f8-b232-4ab7-9b50-17ad83754886 872 0 2025-12-16 13:00:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:559c85bdcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-193-244 calico-apiserver-559c85bdcd-glvwq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicfbbb0a4e85 [] [] }} ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-glvwq" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-" Dec 16 13:00:42.495240 containerd[1618]: 2025-12-16 13:00:42.211 [INFO][4282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-glvwq" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0" Dec 16 13:00:42.495240 containerd[1618]: 2025-12-16 13:00:42.254 [INFO][4306] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" HandleID="k8s-pod-network.2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Workload="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0" Dec 16 13:00:42.495390 containerd[1618]: 2025-12-16 13:00:42.255 [INFO][4306] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" HandleID="k8s-pod-network.2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Workload="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-193-244", "pod":"calico-apiserver-559c85bdcd-glvwq", "timestamp":"2025-12-16 13:00:42.254842434 +0000 UTC"}, Hostname:"172-239-193-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:00:42.495390 containerd[1618]: 2025-12-16 13:00:42.255 [INFO][4306] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:00:42.495390 containerd[1618]: 2025-12-16 13:00:42.272 [INFO][4306] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:00:42.495390 containerd[1618]: 2025-12-16 13:00:42.272 [INFO][4306] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-244' Dec 16 13:00:42.495390 containerd[1618]: 2025-12-16 13:00:42.355 [INFO][4306] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" host="172-239-193-244" Dec 16 13:00:42.495390 containerd[1618]: 2025-12-16 13:00:42.370 [INFO][4306] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-244" Dec 16 13:00:42.495390 containerd[1618]: 2025-12-16 13:00:42.378 [INFO][4306] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:42.495390 containerd[1618]: 2025-12-16 13:00:42.387 [INFO][4306] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:42.495390 containerd[1618]: 2025-12-16 13:00:42.392 [INFO][4306] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:42.495586 containerd[1618]: 2025-12-16 13:00:42.392 [INFO][4306] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" host="172-239-193-244" Dec 16 13:00:42.495586 containerd[1618]: 2025-12-16 13:00:42.402 [INFO][4306] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e Dec 16 13:00:42.495586 containerd[1618]: 2025-12-16 13:00:42.414 [INFO][4306] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" host="172-239-193-244" Dec 16 13:00:42.495586 containerd[1618]: 2025-12-16 13:00:42.421 [INFO][4306] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.195/26] block=192.168.96.192/26 handle="k8s-pod-network.2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" host="172-239-193-244" Dec 16 13:00:42.495586 containerd[1618]: 2025-12-16 13:00:42.422 [INFO][4306] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.195/26] handle="k8s-pod-network.2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" host="172-239-193-244" Dec 16 13:00:42.495586 containerd[1618]: 2025-12-16 13:00:42.422 [INFO][4306] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:00:42.495586 containerd[1618]: 2025-12-16 13:00:42.424 [INFO][4306] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.195/26] IPv6=[] ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" HandleID="k8s-pod-network.2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Workload="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0" Dec 16 13:00:42.495718 containerd[1618]: 2025-12-16 13:00:42.436 [INFO][4282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-glvwq" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0", GenerateName:"calico-apiserver-559c85bdcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c1f37f8-b232-4ab7-9b50-17ad83754886", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559c85bdcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"", Pod:"calico-apiserver-559c85bdcd-glvwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfbbb0a4e85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:42.495768 containerd[1618]: 2025-12-16 13:00:42.436 [INFO][4282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.195/32] ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-glvwq" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0" Dec 16 13:00:42.495768 containerd[1618]: 2025-12-16 13:00:42.436 [INFO][4282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfbbb0a4e85 ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-glvwq" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0" Dec 16 13:00:42.495768 containerd[1618]: 2025-12-16 13:00:42.450 [INFO][4282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-glvwq" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0" Dec 16 13:00:42.495831 containerd[1618]: 2025-12-16 13:00:42.451 [INFO][4282] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-glvwq" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0", GenerateName:"calico-apiserver-559c85bdcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c1f37f8-b232-4ab7-9b50-17ad83754886", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559c85bdcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e", Pod:"calico-apiserver-559c85bdcd-glvwq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfbbb0a4e85", MAC:"3a:ac:8a:45:d7:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:42.495881 containerd[1618]: 2025-12-16 13:00:42.476 [INFO][4282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-glvwq" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--glvwq-eth0" Dec 16 13:00:42.527913 containerd[1618]: time="2025-12-16T13:00:42.527201447Z" level=info msg="connecting to shim 2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e" address="unix:///run/containerd/s/dc041ce22cefca40b859dcaffa64aec86f906ae72ddcbbc08a2b40600e777eb9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:42.553152 systemd[1]: Started cri-containerd-2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e.scope - libcontainer container 2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e. 
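Once the apiserver pod's sandbox is up, the address Calico claimed for it above (192.168.96.195/32) should also be what the Kubernetes API reports as the pod IP. A small client-go sketch of that cross-check; the kubeconfig path is an assumption, while the namespace and pod name are taken from the log:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("calico-apiserver").Get(context.Background(),
		"calico-apiserver-559c85bdcd-glvwq", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Expected to match the 192.168.96.195/32 claimed by IPAM above once the
	// sandbox's network status has been reported back to the API server.
	fmt.Println("pod IP:", pod.Status.PodIP)
}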
Dec 16 13:00:42.565000 audit: BPF prog-id=197 op=LOAD Dec 16 13:00:42.566000 audit: BPF prog-id=198 op=LOAD Dec 16 13:00:42.566000 audit[4398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4387 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636161356238366231323632363536643631643035393038323133 Dec 16 13:00:42.566000 audit: BPF prog-id=198 op=UNLOAD Dec 16 13:00:42.566000 audit[4398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4387 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636161356238366231323632363536643631643035393038323133 Dec 16 13:00:42.566000 audit: BPF prog-id=199 op=LOAD Dec 16 13:00:42.566000 audit[4398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4387 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636161356238366231323632363536643631643035393038323133 Dec 16 13:00:42.566000 audit: BPF prog-id=200 op=LOAD Dec 16 13:00:42.566000 audit[4398]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4387 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636161356238366231323632363536643631643035393038323133 Dec 16 13:00:42.566000 audit: BPF prog-id=200 op=UNLOAD Dec 16 13:00:42.566000 audit[4398]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4387 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636161356238366231323632363536643631643035393038323133 Dec 16 13:00:42.566000 audit: BPF prog-id=199 op=UNLOAD Dec 16 13:00:42.566000 audit[4398]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4387 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636161356238366231323632363536643631643035393038323133 Dec 16 13:00:42.566000 audit: BPF prog-id=201 op=LOAD Dec 16 13:00:42.566000 audit[4398]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4387 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:42.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636161356238366231323632363536643631643035393038323133 Dec 16 13:00:42.606581 containerd[1618]: time="2025-12-16T13:00:42.606478889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559c85bdcd-glvwq,Uid:3c1f37f8-b232-4ab7-9b50-17ad83754886,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2acaa5b86b1262656d61d05908213ac5e4d4b5c5c0d1a7079283834f4443de0e\"" Dec 16 13:00:42.645229 containerd[1618]: time="2025-12-16T13:00:42.645184247Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:42.646298 containerd[1618]: time="2025-12-16T13:00:42.646268031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:00:42.646654 containerd[1618]: time="2025-12-16T13:00:42.646323611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:42.646691 kubelet[2813]: E1216 13:00:42.646646 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:00:42.646691 kubelet[2813]: E1216 13:00:42.646687 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:00:42.647786 kubelet[2813]: E1216 13:00:42.646845 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-n2rhc_calico-system(d1ac913f-bfd6-4d60-abaa-3d193db00d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:42.647786 kubelet[2813]: E1216 13:00:42.646888 2813 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:00:42.647842 containerd[1618]: time="2025-12-16T13:00:42.647222094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:00:42.779400 containerd[1618]: time="2025-12-16T13:00:42.779260357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:42.780469 containerd[1618]: time="2025-12-16T13:00:42.780431381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:00:42.780526 containerd[1618]: time="2025-12-16T13:00:42.780515721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:42.780745 kubelet[2813]: E1216 13:00:42.780699 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:42.780745 kubelet[2813]: E1216 13:00:42.780747 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:42.780913 kubelet[2813]: E1216 13:00:42.780819 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-559c85bdcd-glvwq_calico-apiserver(3c1f37f8-b232-4ab7-9b50-17ad83754886): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:42.780913 kubelet[2813]: E1216 13:00:42.780849 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:00:43.158974 containerd[1618]: time="2025-12-16T13:00:43.158761568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559c85bdcd-wnq6t,Uid:e3fb7e1f-3535-4337-a579-ff59dbec66d0,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:00:43.255601 systemd-networkd[1517]: cali22bb99f3a11: Link UP Dec 16 13:00:43.257326 systemd-networkd[1517]: cali22bb99f3a11: Gained carrier Dec 16 13:00:43.272249 containerd[1618]: 2025-12-16 13:00:43.187 [INFO][4444] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:00:43.272249 containerd[1618]: 2025-12-16 13:00:43.197 [INFO][4444] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0 calico-apiserver-559c85bdcd- calico-apiserver e3fb7e1f-3535-4337-a579-ff59dbec66d0 876 0 2025-12-16 13:00:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:559c85bdcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-193-244 calico-apiserver-559c85bdcd-wnq6t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali22bb99f3a11 [] [] }} ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-wnq6t" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-" Dec 16 13:00:43.272249 containerd[1618]: 2025-12-16 13:00:43.197 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-wnq6t" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0" Dec 16 13:00:43.272249 containerd[1618]: 2025-12-16 13:00:43.220 [INFO][4456] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" HandleID="k8s-pod-network.8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Workload="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0" Dec 16 13:00:43.272672 containerd[1618]: 2025-12-16 13:00:43.220 [INFO][4456] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" HandleID="k8s-pod-network.8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Workload="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-193-244", "pod":"calico-apiserver-559c85bdcd-wnq6t", "timestamp":"2025-12-16 13:00:43.220727176 +0000 UTC"}, Hostname:"172-239-193-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:00:43.272672 containerd[1618]: 2025-12-16 13:00:43.221 [INFO][4456] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:00:43.272672 containerd[1618]: 2025-12-16 13:00:43.221 [INFO][4456] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:00:43.272672 containerd[1618]: 2025-12-16 13:00:43.221 [INFO][4456] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-244' Dec 16 13:00:43.272672 containerd[1618]: 2025-12-16 13:00:43.226 [INFO][4456] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" host="172-239-193-244" Dec 16 13:00:43.272672 containerd[1618]: 2025-12-16 13:00:43.229 [INFO][4456] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-244" Dec 16 13:00:43.272672 containerd[1618]: 2025-12-16 13:00:43.235 [INFO][4456] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:43.272672 containerd[1618]: 2025-12-16 13:00:43.237 [INFO][4456] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:43.272672 containerd[1618]: 2025-12-16 13:00:43.240 [INFO][4456] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:43.272926 containerd[1618]: 2025-12-16 13:00:43.240 [INFO][4456] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" host="172-239-193-244" Dec 16 13:00:43.272926 containerd[1618]: 2025-12-16 13:00:43.241 [INFO][4456] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628 Dec 16 13:00:43.272926 containerd[1618]: 2025-12-16 13:00:43.245 [INFO][4456] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" host="172-239-193-244" Dec 16 13:00:43.272926 containerd[1618]: 2025-12-16 13:00:43.249 [INFO][4456] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.196/26] block=192.168.96.192/26 handle="k8s-pod-network.8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" host="172-239-193-244" Dec 16 13:00:43.272926 containerd[1618]: 2025-12-16 13:00:43.249 [INFO][4456] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.196/26] handle="k8s-pod-network.8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" host="172-239-193-244" Dec 16 13:00:43.272926 containerd[1618]: 2025-12-16 13:00:43.249 [INFO][4456] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:00:43.272926 containerd[1618]: 2025-12-16 13:00:43.249 [INFO][4456] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.196/26] IPv6=[] ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" HandleID="k8s-pod-network.8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Workload="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0" Dec 16 13:00:43.273132 containerd[1618]: 2025-12-16 13:00:43.252 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-wnq6t" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0", GenerateName:"calico-apiserver-559c85bdcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3fb7e1f-3535-4337-a579-ff59dbec66d0", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559c85bdcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"", Pod:"calico-apiserver-559c85bdcd-wnq6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22bb99f3a11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:43.273212 containerd[1618]: 2025-12-16 13:00:43.252 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.196/32] ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-wnq6t" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0" Dec 16 13:00:43.273212 containerd[1618]: 2025-12-16 13:00:43.253 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22bb99f3a11 ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-wnq6t" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0" Dec 16 13:00:43.273212 containerd[1618]: 2025-12-16 13:00:43.256 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-wnq6t" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0" Dec 16 13:00:43.273313 containerd[1618]: 2025-12-16 13:00:43.256 [INFO][4444] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-wnq6t" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0", GenerateName:"calico-apiserver-559c85bdcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3fb7e1f-3535-4337-a579-ff59dbec66d0", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559c85bdcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628", Pod:"calico-apiserver-559c85bdcd-wnq6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali22bb99f3a11", MAC:"5e:36:22:f8:41:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:43.273368 containerd[1618]: 2025-12-16 13:00:43.267 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" Namespace="calico-apiserver" Pod="calico-apiserver-559c85bdcd-wnq6t" WorkloadEndpoint="172--239--193--244-k8s-calico--apiserver--559c85bdcd--wnq6t-eth0" Dec 16 13:00:43.292213 containerd[1618]: time="2025-12-16T13:00:43.292117800Z" level=info msg="connecting to shim 8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628" address="unix:///run/containerd/s/60c97d5ddc7962c9865166f7868f5ea46fc52b6df40537fde62abbdb14683152" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:43.321163 systemd[1]: Started cri-containerd-8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628.scope - libcontainer container 8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628. 
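
Aside on the IPAM trace above (not part of the log itself): Calico is handing this node (172-239-193-244) addresses out of its affinity block 192.168.96.192/26, and the pod IPs that appear in this part of the log (192.168.96.196 here, with .197 and .198 assigned further down) all have to land inside that block. A minimal standalone sketch of the block arithmetic, in plain Python rather than Calico's own Go code:

#!/usr/bin/env python3
"""Sanity-check the Calico IPAM affinity block seen in the surrounding log entries."""
import ipaddress

block = ipaddress.ip_network("192.168.96.192/26")   # affinity block from the ipam/ipam.go lines above
pod_ips = ["192.168.96.196", "192.168.96.197", "192.168.96.198"]  # addresses assigned in this log

# A /26 spans 64 addresses: 192.168.96.192 .. 192.168.96.255.
print(f"{block}: {block.num_addresses} addresses, {block.network_address} .. {block.broadcast_address}")
for ip in pod_ips:
    assert ipaddress.ip_address(ip) in block
    print(f"{ip} falls inside {block}")
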
Dec 16 13:00:43.328651 kubelet[2813]: E1216 13:00:43.328257 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:00:43.335206 kubelet[2813]: E1216 13:00:43.335183 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:00:43.340000 audit: BPF prog-id=202 op=LOAD Dec 16 13:00:43.341000 audit: BPF prog-id=203 op=LOAD Dec 16 13:00:43.341000 audit[4489]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:43.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861373536666432633535363333633237616166373762613439663963 Dec 16 13:00:43.341000 audit: BPF prog-id=203 op=UNLOAD Dec 16 13:00:43.341000 audit[4489]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:43.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861373536666432633535363333633237616166373762613439663963 Dec 16 13:00:43.342000 audit: BPF prog-id=204 op=LOAD Dec 16 13:00:43.342000 audit[4489]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:43.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861373536666432633535363333633237616166373762613439663963 Dec 16 13:00:43.342000 audit: BPF prog-id=205 op=LOAD Dec 16 13:00:43.342000 audit[4489]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:43.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861373536666432633535363333633237616166373762613439663963 Dec 16 13:00:43.342000 audit: BPF prog-id=205 op=UNLOAD Dec 16 13:00:43.342000 audit[4489]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:43.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861373536666432633535363333633237616166373762613439663963 Dec 16 13:00:43.342000 audit: BPF prog-id=204 op=UNLOAD Dec 16 13:00:43.342000 audit[4489]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:43.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861373536666432633535363333633237616166373762613439663963 Dec 16 13:00:43.342000 audit: BPF prog-id=206 op=LOAD Dec 16 13:00:43.342000 audit[4489]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:43.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861373536666432633535363333633237616166373762613439663963 Dec 16 13:00:43.365000 audit[4509]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:43.365000 audit[4509]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc72765410 a2=0 a3=7ffc727653fc items=0 ppid=2965 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:43.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:43.373000 audit[4509]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:43.373000 audit[4509]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc72765410 a2=0 a3=0 items=0 ppid=2965 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:43.373000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:43.389298 systemd-networkd[1517]: cali9b6a1c86f6d: Gained IPv6LL Dec 16 13:00:43.397141 containerd[1618]: time="2025-12-16T13:00:43.397101911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559c85bdcd-wnq6t,Uid:e3fb7e1f-3535-4337-a579-ff59dbec66d0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8a756fd2c55633c27aaf77ba49f9c4f42498da616f29f4141ade675f30218628\"" Dec 16 13:00:43.401951 containerd[1618]: time="2025-12-16T13:00:43.401931495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:00:43.536192 containerd[1618]: time="2025-12-16T13:00:43.535972430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:43.538042 containerd[1618]: time="2025-12-16T13:00:43.537975195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:00:43.538238 kubelet[2813]: E1216 13:00:43.538154 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:43.538238 kubelet[2813]: E1216 13:00:43.538188 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:43.538346 kubelet[2813]: E1216 13:00:43.538239 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-559c85bdcd-wnq6t_calico-apiserver(e3fb7e1f-3535-4337-a579-ff59dbec66d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:43.538346 kubelet[2813]: E1216 13:00:43.538267 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:00:43.538555 containerd[1618]: time="2025-12-16T13:00:43.538170656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:43.773704 systemd-networkd[1517]: calicfbbb0a4e85: Gained IPv6LL Dec 16 13:00:44.158778 kubelet[2813]: E1216 13:00:44.158696 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:44.159985 containerd[1618]: 
time="2025-12-16T13:00:44.159713389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4w8pf,Uid:c15e2c6a-d918-4414-bed7-25c87abcbc42,Namespace:kube-system,Attempt:0,}" Dec 16 13:00:44.260746 systemd-networkd[1517]: cali549c869002e: Link UP Dec 16 13:00:44.263196 systemd-networkd[1517]: cali549c869002e: Gained carrier Dec 16 13:00:44.277382 containerd[1618]: 2025-12-16 13:00:44.189 [INFO][4536] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:00:44.277382 containerd[1618]: 2025-12-16 13:00:44.199 [INFO][4536] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0 coredns-66bc5c9577- kube-system c15e2c6a-d918-4414-bed7-25c87abcbc42 865 0 2025-12-16 13:00:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-193-244 coredns-66bc5c9577-4w8pf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali549c869002e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Namespace="kube-system" Pod="coredns-66bc5c9577-4w8pf" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-" Dec 16 13:00:44.277382 containerd[1618]: 2025-12-16 13:00:44.199 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Namespace="kube-system" Pod="coredns-66bc5c9577-4w8pf" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0" Dec 16 13:00:44.277382 containerd[1618]: 2025-12-16 13:00:44.226 [INFO][4547] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" HandleID="k8s-pod-network.d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Workload="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0" Dec 16 13:00:44.277932 containerd[1618]: 2025-12-16 13:00:44.226 [INFO][4547] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" HandleID="k8s-pod-network.d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Workload="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-193-244", "pod":"coredns-66bc5c9577-4w8pf", "timestamp":"2025-12-16 13:00:44.226233648 +0000 UTC"}, Hostname:"172-239-193-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:00:44.277932 containerd[1618]: 2025-12-16 13:00:44.226 [INFO][4547] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:00:44.277932 containerd[1618]: 2025-12-16 13:00:44.226 [INFO][4547] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:00:44.277932 containerd[1618]: 2025-12-16 13:00:44.226 [INFO][4547] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-244' Dec 16 13:00:44.277932 containerd[1618]: 2025-12-16 13:00:44.232 [INFO][4547] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" host="172-239-193-244" Dec 16 13:00:44.277932 containerd[1618]: 2025-12-16 13:00:44.236 [INFO][4547] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-244" Dec 16 13:00:44.277932 containerd[1618]: 2025-12-16 13:00:44.239 [INFO][4547] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:44.277932 containerd[1618]: 2025-12-16 13:00:44.241 [INFO][4547] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:44.277932 containerd[1618]: 2025-12-16 13:00:44.243 [INFO][4547] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:44.277932 containerd[1618]: 2025-12-16 13:00:44.243 [INFO][4547] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" host="172-239-193-244" Dec 16 13:00:44.278256 containerd[1618]: 2025-12-16 13:00:44.244 [INFO][4547] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5 Dec 16 13:00:44.278256 containerd[1618]: 2025-12-16 13:00:44.247 [INFO][4547] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" host="172-239-193-244" Dec 16 13:00:44.278256 containerd[1618]: 2025-12-16 13:00:44.252 [INFO][4547] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.197/26] block=192.168.96.192/26 handle="k8s-pod-network.d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" host="172-239-193-244" Dec 16 13:00:44.278256 containerd[1618]: 2025-12-16 13:00:44.252 [INFO][4547] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.197/26] handle="k8s-pod-network.d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" host="172-239-193-244" Dec 16 13:00:44.278256 containerd[1618]: 2025-12-16 13:00:44.252 [INFO][4547] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:00:44.278256 containerd[1618]: 2025-12-16 13:00:44.252 [INFO][4547] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.197/26] IPv6=[] ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" HandleID="k8s-pod-network.d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Workload="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0" Dec 16 13:00:44.278499 containerd[1618]: 2025-12-16 13:00:44.258 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Namespace="kube-system" Pod="coredns-66bc5c9577-4w8pf" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c15e2c6a-d918-4414-bed7-25c87abcbc42", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"", Pod:"coredns-66bc5c9577-4w8pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali549c869002e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:44.278499 containerd[1618]: 2025-12-16 13:00:44.258 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.197/32] ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Namespace="kube-system" Pod="coredns-66bc5c9577-4w8pf" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0" Dec 16 13:00:44.278499 containerd[1618]: 2025-12-16 13:00:44.258 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali549c869002e ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Namespace="kube-system" Pod="coredns-66bc5c9577-4w8pf" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0" Dec 16 
13:00:44.278499 containerd[1618]: 2025-12-16 13:00:44.261 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Namespace="kube-system" Pod="coredns-66bc5c9577-4w8pf" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0" Dec 16 13:00:44.278499 containerd[1618]: 2025-12-16 13:00:44.261 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Namespace="kube-system" Pod="coredns-66bc5c9577-4w8pf" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c15e2c6a-d918-4414-bed7-25c87abcbc42", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5", Pod:"coredns-66bc5c9577-4w8pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali549c869002e", MAC:"d2:38:4b:73:68:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:44.278499 containerd[1618]: 2025-12-16 13:00:44.272 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" Namespace="kube-system" Pod="coredns-66bc5c9577-4w8pf" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--4w8pf-eth0" Dec 16 13:00:44.298352 containerd[1618]: time="2025-12-16T13:00:44.298315411Z" level=info msg="connecting to shim d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5" address="unix:///run/containerd/s/b71fb0325a2628a972bb1a6a2342f698ef22b0081b06a43b3f75e534c8a3125e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 
13:00:44.330160 systemd[1]: Started cri-containerd-d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5.scope - libcontainer container d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5. Dec 16 13:00:44.340036 kubelet[2813]: E1216 13:00:44.339992 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:00:44.340170 kubelet[2813]: E1216 13:00:44.339995 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:00:44.342107 kubelet[2813]: E1216 13:00:44.342074 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:00:44.349377 systemd-networkd[1517]: cali22bb99f3a11: Gained IPv6LL Dec 16 13:00:44.394000 audit[4598]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:44.394000 audit[4598]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd58aa2660 a2=0 a3=7ffd58aa264c items=0 ppid=2965 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:44.400000 audit: BPF prog-id=207 op=LOAD Dec 16 13:00:44.401000 audit: BPF prog-id=208 op=LOAD Dec 16 13:00:44.401000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4566 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433663361363462323766323737373366653430663737393064383563 Dec 16 13:00:44.401000 audit: BPF prog-id=208 op=UNLOAD Dec 16 13:00:44.401000 audit[4579]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4566 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433663361363462323766323737373366653430663737393064383563 Dec 16 13:00:44.401000 audit[4598]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:44.401000 audit[4598]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd58aa2660 a2=0 a3=0 items=0 ppid=2965 pid=4598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:44.401000 audit: BPF prog-id=209 op=LOAD Dec 16 13:00:44.401000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4566 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433663361363462323766323737373366653430663737393064383563 Dec 16 13:00:44.401000 audit: BPF prog-id=210 op=LOAD Dec 16 13:00:44.401000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4566 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433663361363462323766323737373366653430663737393064383563 Dec 16 13:00:44.401000 audit: BPF prog-id=210 op=UNLOAD Dec 16 13:00:44.401000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4566 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433663361363462323766323737373366653430663737393064383563 Dec 16 13:00:44.401000 audit: BPF prog-id=209 op=UNLOAD Dec 16 13:00:44.401000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4566 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433663361363462323766323737373366653430663737393064383563 Dec 16 13:00:44.401000 audit: BPF prog-id=211 op=LOAD Dec 16 13:00:44.401000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4566 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433663361363462323766323737373366653430663737393064383563 Dec 16 13:00:44.463602 containerd[1618]: time="2025-12-16T13:00:44.463569285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4w8pf,Uid:c15e2c6a-d918-4414-bed7-25c87abcbc42,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5\"" Dec 16 13:00:44.464348 kubelet[2813]: E1216 13:00:44.464322 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:44.469251 containerd[1618]: time="2025-12-16T13:00:44.469228171Z" level=info msg="CreateContainer within sandbox \"d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:00:44.479034 containerd[1618]: time="2025-12-16T13:00:44.478140625Z" level=info msg="Container e51ca41c09eda6de10f334bbb096516796202bd39142b50d4ef2172670884d3d: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:00:44.483715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707618506.mount: Deactivated successfully. Dec 16 13:00:44.486354 containerd[1618]: time="2025-12-16T13:00:44.486329527Z" level=info msg="CreateContainer within sandbox \"d3f3a64b27f27773fe40f7790d85c386aad2c366c9ca01a67204b73b4219b8d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e51ca41c09eda6de10f334bbb096516796202bd39142b50d4ef2172670884d3d\"" Dec 16 13:00:44.486847 containerd[1618]: time="2025-12-16T13:00:44.486824968Z" level=info msg="StartContainer for \"e51ca41c09eda6de10f334bbb096516796202bd39142b50d4ef2172670884d3d\"" Dec 16 13:00:44.488191 containerd[1618]: time="2025-12-16T13:00:44.488167932Z" level=info msg="connecting to shim e51ca41c09eda6de10f334bbb096516796202bd39142b50d4ef2172670884d3d" address="unix:///run/containerd/s/b71fb0325a2628a972bb1a6a2342f698ef22b0081b06a43b3f75e534c8a3125e" protocol=ttrpc version=3 Dec 16 13:00:44.512163 systemd[1]: Started cri-containerd-e51ca41c09eda6de10f334bbb096516796202bd39142b50d4ef2172670884d3d.scope - libcontainer container e51ca41c09eda6de10f334bbb096516796202bd39142b50d4ef2172670884d3d. 
Dec 16 13:00:44.536000 audit: BPF prog-id=212 op=LOAD Dec 16 13:00:44.538000 audit: BPF prog-id=213 op=LOAD Dec 16 13:00:44.538000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4566 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535316361343163303965646136646531306633333462626230393635 Dec 16 13:00:44.538000 audit: BPF prog-id=213 op=UNLOAD Dec 16 13:00:44.538000 audit[4606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4566 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535316361343163303965646136646531306633333462626230393635 Dec 16 13:00:44.539000 audit: BPF prog-id=214 op=LOAD Dec 16 13:00:44.539000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4566 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535316361343163303965646136646531306633333462626230393635 Dec 16 13:00:44.539000 audit: BPF prog-id=215 op=LOAD Dec 16 13:00:44.539000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4566 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535316361343163303965646136646531306633333462626230393635 Dec 16 13:00:44.539000 audit: BPF prog-id=215 op=UNLOAD Dec 16 13:00:44.539000 audit[4606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4566 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535316361343163303965646136646531306633333462626230393635 Dec 16 13:00:44.539000 audit: BPF prog-id=214 op=UNLOAD Dec 16 13:00:44.539000 audit[4606]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4566 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535316361343163303965646136646531306633333462626230393635 Dec 16 13:00:44.540000 audit: BPF prog-id=216 op=LOAD Dec 16 13:00:44.540000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4566 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:44.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535316361343163303965646136646531306633333462626230393635 Dec 16 13:00:44.560917 containerd[1618]: time="2025-12-16T13:00:44.560875737Z" level=info msg="StartContainer for \"e51ca41c09eda6de10f334bbb096516796202bd39142b50d4ef2172670884d3d\" returns successfully" Dec 16 13:00:45.159353 containerd[1618]: time="2025-12-16T13:00:45.159315228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-579dd74c6f-9xbhn,Uid:7a67396c-f7d6-4b07-9621-f2a99ef21577,Namespace:calico-system,Attempt:0,}" Dec 16 13:00:45.160041 kubelet[2813]: E1216 13:00:45.159894 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:45.160349 containerd[1618]: time="2025-12-16T13:00:45.160305811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cwnsn,Uid:ed7f4f33-5b34-46d1-90ea-2333fc7431f2,Namespace:kube-system,Attempt:0,}" Dec 16 13:00:45.290988 systemd-networkd[1517]: calic7cfd7c152a: Link UP Dec 16 13:00:45.292637 systemd-networkd[1517]: calic7cfd7c152a: Gained carrier Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.202 [INFO][4661] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.216 [INFO][4661] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0 coredns-66bc5c9577- kube-system ed7f4f33-5b34-46d1-90ea-2333fc7431f2 873 0 2025-12-16 13:00:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-193-244 coredns-66bc5c9577-cwnsn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic7cfd7c152a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Namespace="kube-system" Pod="coredns-66bc5c9577-cwnsn" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.216 
[INFO][4661] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Namespace="kube-system" Pod="coredns-66bc5c9577-cwnsn" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.246 [INFO][4685] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" HandleID="k8s-pod-network.04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Workload="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.247 [INFO][4685] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" HandleID="k8s-pod-network.04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Workload="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-193-244", "pod":"coredns-66bc5c9577-cwnsn", "timestamp":"2025-12-16 13:00:45.246901329 +0000 UTC"}, Hostname:"172-239-193-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.247 [INFO][4685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.247 [INFO][4685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.247 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-244' Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.256 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" host="172-239-193-244" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.260 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-244" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.265 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.267 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.269 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.269 [INFO][4685] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" host="172-239-193-244" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.271 [INFO][4685] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.275 [INFO][4685] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.192/26 
handle="k8s-pod-network.04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" host="172-239-193-244" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.280 [INFO][4685] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.198/26] block=192.168.96.192/26 handle="k8s-pod-network.04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" host="172-239-193-244" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.280 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.198/26] handle="k8s-pod-network.04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" host="172-239-193-244" Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.280 [INFO][4685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:00:45.311221 containerd[1618]: 2025-12-16 13:00:45.280 [INFO][4685] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.198/26] IPv6=[] ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" HandleID="k8s-pod-network.04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Workload="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0" Dec 16 13:00:45.311969 containerd[1618]: 2025-12-16 13:00:45.284 [INFO][4661] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Namespace="kube-system" Pod="coredns-66bc5c9577-cwnsn" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ed7f4f33-5b34-46d1-90ea-2333fc7431f2", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"", Pod:"coredns-66bc5c9577-cwnsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7cfd7c152a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:45.311969 containerd[1618]: 2025-12-16 13:00:45.284 [INFO][4661] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.198/32] ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Namespace="kube-system" Pod="coredns-66bc5c9577-cwnsn" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0" Dec 16 13:00:45.311969 containerd[1618]: 2025-12-16 13:00:45.284 [INFO][4661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7cfd7c152a ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Namespace="kube-system" Pod="coredns-66bc5c9577-cwnsn" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0" Dec 16 13:00:45.311969 containerd[1618]: 2025-12-16 13:00:45.296 [INFO][4661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Namespace="kube-system" Pod="coredns-66bc5c9577-cwnsn" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0" Dec 16 13:00:45.311969 containerd[1618]: 2025-12-16 13:00:45.296 [INFO][4661] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Namespace="kube-system" Pod="coredns-66bc5c9577-cwnsn" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ed7f4f33-5b34-46d1-90ea-2333fc7431f2", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe", Pod:"coredns-66bc5c9577-cwnsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7cfd7c152a", MAC:"6a:04:00:fc:e0:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:45.311969 containerd[1618]: 2025-12-16 13:00:45.309 [INFO][4661] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" Namespace="kube-system" Pod="coredns-66bc5c9577-cwnsn" WorkloadEndpoint="172--239--193--244-k8s-coredns--66bc5c9577--cwnsn-eth0" Dec 16 13:00:45.330707 containerd[1618]: time="2025-12-16T13:00:45.329965218Z" level=info msg="connecting to shim 04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe" address="unix:///run/containerd/s/d3a02913e2dd81ebf044e60cb5d0f7583a25798f79f07ec23f2771f262304369" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:45.346665 kubelet[2813]: E1216 13:00:45.346621 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:45.349820 kubelet[2813]: E1216 13:00:45.349752 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:00:45.368177 systemd[1]: Started cri-containerd-04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe.scope - libcontainer container 04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe. 
Dec 16 13:00:45.374631 kubelet[2813]: I1216 13:00:45.373967 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4w8pf" podStartSLOduration=36.373270987 podStartE2EDuration="36.373270987s" podCreationTimestamp="2025-12-16 13:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:00:45.371732703 +0000 UTC m=+43.318157186" watchObservedRunningTime="2025-12-16 13:00:45.373270987 +0000 UTC m=+43.319695470" Dec 16 13:00:45.398000 audit: BPF prog-id=217 op=LOAD Dec 16 13:00:45.398000 audit: BPF prog-id=218 op=LOAD Dec 16 13:00:45.398000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4712 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653462613832383231336265303162626338663239323662376430 Dec 16 13:00:45.398000 audit: BPF prog-id=218 op=UNLOAD Dec 16 13:00:45.398000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653462613832383231336265303162626338663239323662376430 Dec 16 13:00:45.399000 audit: BPF prog-id=219 op=LOAD Dec 16 13:00:45.399000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4712 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653462613832383231336265303162626338663239323662376430 Dec 16 13:00:45.399000 audit: BPF prog-id=220 op=LOAD Dec 16 13:00:45.399000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4712 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653462613832383231336265303162626338663239323662376430 Dec 16 13:00:45.399000 audit: BPF prog-id=220 op=UNLOAD Dec 16 13:00:45.399000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4725 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653462613832383231336265303162626338663239323662376430 Dec 16 13:00:45.399000 audit: BPF prog-id=219 op=UNLOAD Dec 16 13:00:45.399000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653462613832383231336265303162626338663239323662376430 Dec 16 13:00:45.400000 audit: BPF prog-id=221 op=LOAD Dec 16 13:00:45.400000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4712 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653462613832383231336265303162626338663239323662376430 Dec 16 13:00:45.455492 systemd-networkd[1517]: calif55941d38cb: Link UP Dec 16 13:00:45.460099 systemd-networkd[1517]: calif55941d38cb: Gained carrier Dec 16 13:00:45.456000 audit[4744]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:45.456000 audit[4744]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff32f07b40 a2=0 a3=7fff32f07b2c items=0 ppid=2965 pid=4744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.456000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:45.470857 containerd[1618]: time="2025-12-16T13:00:45.470683013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cwnsn,Uid:ed7f4f33-5b34-46d1-90ea-2333fc7431f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe\"" Dec 16 13:00:45.469000 audit[4744]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:45.469000 audit[4744]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff32f07b40 a2=0 a3=0 items=0 ppid=2965 pid=4744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.469000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:45.477877 kubelet[2813]: E1216 13:00:45.477838 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:45.490748 containerd[1618]: time="2025-12-16T13:00:45.490710063Z" level=info msg="CreateContainer within sandbox \"04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.207 [INFO][4659] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.218 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0 calico-kube-controllers-579dd74c6f- calico-system 7a67396c-f7d6-4b07-9621-f2a99ef21577 871 0 2025-12-16 13:00:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:579dd74c6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-193-244 calico-kube-controllers-579dd74c6f-9xbhn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif55941d38cb [] [] }} ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Namespace="calico-system" Pod="calico-kube-controllers-579dd74c6f-9xbhn" WorkloadEndpoint="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.218 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Namespace="calico-system" Pod="calico-kube-controllers-579dd74c6f-9xbhn" WorkloadEndpoint="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.249 [INFO][4687] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" HandleID="k8s-pod-network.4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Workload="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.251 [INFO][4687] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" HandleID="k8s-pod-network.4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Workload="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003323b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-244", "pod":"calico-kube-controllers-579dd74c6f-9xbhn", "timestamp":"2025-12-16 13:00:45.249495205 +0000 UTC"}, Hostname:"172-239-193-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.251 [INFO][4687] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.281 [INFO][4687] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.282 [INFO][4687] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-244' Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.361 [INFO][4687] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" host="172-239-193-244" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.378 [INFO][4687] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-244" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.389 [INFO][4687] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.391 [INFO][4687] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.396 [INFO][4687] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.396 [INFO][4687] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" host="172-239-193-244" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.398 [INFO][4687] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07 Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.403 [INFO][4687] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" host="172-239-193-244" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.414 [INFO][4687] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.199/26] block=192.168.96.192/26 handle="k8s-pod-network.4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" host="172-239-193-244" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.414 [INFO][4687] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.199/26] handle="k8s-pod-network.4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" host="172-239-193-244" Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.414 [INFO][4687] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:00:45.491472 containerd[1618]: 2025-12-16 13:00:45.414 [INFO][4687] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.199/26] IPv6=[] ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" HandleID="k8s-pod-network.4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Workload="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0" Dec 16 13:00:45.491918 containerd[1618]: 2025-12-16 13:00:45.429 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Namespace="calico-system" Pod="calico-kube-controllers-579dd74c6f-9xbhn" WorkloadEndpoint="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0", GenerateName:"calico-kube-controllers-579dd74c6f-", Namespace:"calico-system", SelfLink:"", UID:"7a67396c-f7d6-4b07-9621-f2a99ef21577", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"579dd74c6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"", Pod:"calico-kube-controllers-579dd74c6f-9xbhn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif55941d38cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:45.491918 containerd[1618]: 2025-12-16 13:00:45.438 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.199/32] ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Namespace="calico-system" Pod="calico-kube-controllers-579dd74c6f-9xbhn" WorkloadEndpoint="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0" Dec 16 13:00:45.491918 containerd[1618]: 2025-12-16 13:00:45.439 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif55941d38cb ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Namespace="calico-system" Pod="calico-kube-controllers-579dd74c6f-9xbhn" WorkloadEndpoint="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0" Dec 16 13:00:45.491918 containerd[1618]: 2025-12-16 13:00:45.461 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Namespace="calico-system" Pod="calico-kube-controllers-579dd74c6f-9xbhn" WorkloadEndpoint="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0" Dec 16 13:00:45.491918 containerd[1618]: 2025-12-16 
13:00:45.463 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Namespace="calico-system" Pod="calico-kube-controllers-579dd74c6f-9xbhn" WorkloadEndpoint="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0", GenerateName:"calico-kube-controllers-579dd74c6f-", Namespace:"calico-system", SelfLink:"", UID:"7a67396c-f7d6-4b07-9621-f2a99ef21577", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"579dd74c6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07", Pod:"calico-kube-controllers-579dd74c6f-9xbhn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif55941d38cb", MAC:"7a:64:1a:9b:d5:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:45.491918 containerd[1618]: 2025-12-16 13:00:45.482 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" Namespace="calico-system" Pod="calico-kube-controllers-579dd74c6f-9xbhn" WorkloadEndpoint="172--239--193--244-k8s-calico--kube--controllers--579dd74c6f--9xbhn-eth0" Dec 16 13:00:45.505265 containerd[1618]: time="2025-12-16T13:00:45.505242100Z" level=info msg="Container 42fd3a12524fb4f88087194d812c15bcd595644e2d731129e8edea770143fd9e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:00:45.522027 containerd[1618]: time="2025-12-16T13:00:45.520760619Z" level=info msg="CreateContainer within sandbox \"04e4ba828213be01bbc8f2926b7d01ea46caeb29e0c62e9d62f12fa589daf1fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42fd3a12524fb4f88087194d812c15bcd595644e2d731129e8edea770143fd9e\"" Dec 16 13:00:45.522173 containerd[1618]: time="2025-12-16T13:00:45.522153132Z" level=info msg="StartContainer for \"42fd3a12524fb4f88087194d812c15bcd595644e2d731129e8edea770143fd9e\"" Dec 16 13:00:45.527326 containerd[1618]: time="2025-12-16T13:00:45.527279465Z" level=info msg="connecting to shim 4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07" address="unix:///run/containerd/s/2fc6e8ba9ba03d3d0869194590fccb1129807e13c3f54e4feda103b3a870483a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:45.527601 containerd[1618]: time="2025-12-16T13:00:45.527575016Z" level=info msg="connecting to shim 
42fd3a12524fb4f88087194d812c15bcd595644e2d731129e8edea770143fd9e" address="unix:///run/containerd/s/d3a02913e2dd81ebf044e60cb5d0f7583a25798f79f07ec23f2771f262304369" protocol=ttrpc version=3 Dec 16 13:00:45.557373 systemd[1]: Started cri-containerd-42fd3a12524fb4f88087194d812c15bcd595644e2d731129e8edea770143fd9e.scope - libcontainer container 42fd3a12524fb4f88087194d812c15bcd595644e2d731129e8edea770143fd9e. Dec 16 13:00:45.576451 systemd[1]: Started cri-containerd-4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07.scope - libcontainer container 4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07. Dec 16 13:00:45.589000 audit: BPF prog-id=222 op=LOAD Dec 16 13:00:45.590000 audit: BPF prog-id=223 op=LOAD Dec 16 13:00:45.590000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4712 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666433613132353234666234663838303837313934643831326331 Dec 16 13:00:45.590000 audit: BPF prog-id=223 op=UNLOAD Dec 16 13:00:45.590000 audit[4770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666433613132353234666234663838303837313934643831326331 Dec 16 13:00:45.590000 audit: BPF prog-id=224 op=LOAD Dec 16 13:00:45.590000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4712 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666433613132353234666234663838303837313934643831326331 Dec 16 13:00:45.590000 audit: BPF prog-id=225 op=LOAD Dec 16 13:00:45.590000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4712 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666433613132353234666234663838303837313934643831326331 Dec 16 13:00:45.590000 audit: BPF prog-id=225 op=UNLOAD Dec 16 13:00:45.590000 audit[4770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=17 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666433613132353234666234663838303837313934643831326331 Dec 16 13:00:45.590000 audit: BPF prog-id=224 op=UNLOAD Dec 16 13:00:45.590000 audit[4770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4712 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666433613132353234666234663838303837313934643831326331 Dec 16 13:00:45.590000 audit: BPF prog-id=226 op=LOAD Dec 16 13:00:45.590000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4712 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432666433613132353234666234663838303837313934643831326331 Dec 16 13:00:45.622814 containerd[1618]: time="2025-12-16T13:00:45.622772646Z" level=info msg="StartContainer for \"42fd3a12524fb4f88087194d812c15bcd595644e2d731129e8edea770143fd9e\" returns successfully" Dec 16 13:00:45.667000 audit: BPF prog-id=227 op=LOAD Dec 16 13:00:45.668000 audit: BPF prog-id=228 op=LOAD Dec 16 13:00:45.668000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343333636362363232666335363632636135666330613466323663 Dec 16 13:00:45.668000 audit: BPF prog-id=228 op=UNLOAD Dec 16 13:00:45.668000 audit[4778]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343333636362363232666335363632636135666330613466323663 Dec 16 13:00:45.668000 audit: BPF prog-id=229 
op=LOAD Dec 16 13:00:45.668000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343333636362363232666335363632636135666330613466323663 Dec 16 13:00:45.668000 audit: BPF prog-id=230 op=LOAD Dec 16 13:00:45.668000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343333636362363232666335363632636135666330613466323663 Dec 16 13:00:45.668000 audit: BPF prog-id=230 op=UNLOAD Dec 16 13:00:45.668000 audit[4778]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343333636362363232666335363632636135666330613466323663 Dec 16 13:00:45.668000 audit: BPF prog-id=229 op=UNLOAD Dec 16 13:00:45.668000 audit[4778]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343333636362363232666335363632636135666330613466323663 Dec 16 13:00:45.669000 audit: BPF prog-id=231 op=LOAD Dec 16 13:00:45.669000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4766 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:45.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343333636362363232666335363632636135666330613466323663 Dec 16 13:00:45.697077 kubelet[2813]: I1216 13:00:45.697042 2813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:00:45.697891 kubelet[2813]: E1216 13:00:45.697861 
2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:45.724371 containerd[1618]: time="2025-12-16T13:00:45.724239181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-579dd74c6f-9xbhn,Uid:7a67396c-f7d6-4b07-9621-f2a99ef21577,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d433ccb622fc5662ca5fc0a4f26c61e9bee16a247a9ea5da3182fd00858fc07\"" Dec 16 13:00:45.730917 containerd[1618]: time="2025-12-16T13:00:45.730875168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:00:45.869699 containerd[1618]: time="2025-12-16T13:00:45.869635018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:45.871037 containerd[1618]: time="2025-12-16T13:00:45.870956291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:00:45.871244 containerd[1618]: time="2025-12-16T13:00:45.871133091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:45.871848 kubelet[2813]: E1216 13:00:45.871734 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:00:45.871848 kubelet[2813]: E1216 13:00:45.871806 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:00:45.872308 kubelet[2813]: E1216 13:00:45.872235 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-579dd74c6f-9xbhn_calico-system(7a67396c-f7d6-4b07-9621-f2a99ef21577): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:45.872308 kubelet[2813]: E1216 13:00:45.872271 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:00:46.161838 containerd[1618]: time="2025-12-16T13:00:46.161790918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8llz,Uid:1c658f10-e923-42a5-b425-72ee5f2a64c8,Namespace:calico-system,Attempt:0,}" Dec 16 13:00:46.173316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961381846.mount: Deactivated 
successfully. Dec 16 13:00:46.206117 systemd-networkd[1517]: cali549c869002e: Gained IPv6LL Dec 16 13:00:46.318647 systemd-networkd[1517]: cali4fb5e1b653c: Link UP Dec 16 13:00:46.320253 systemd-networkd[1517]: cali4fb5e1b653c: Gained carrier Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.227 [INFO][4876] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.241 [INFO][4876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--244-k8s-csi--node--driver--n8llz-eth0 csi-node-driver- calico-system 1c658f10-e923-42a5-b425-72ee5f2a64c8 769 0 2025-12-16 13:00:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-193-244 csi-node-driver-n8llz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4fb5e1b653c [] [] }} ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" Namespace="calico-system" Pod="csi-node-driver-n8llz" WorkloadEndpoint="172--239--193--244-k8s-csi--node--driver--n8llz-" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.241 [INFO][4876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" Namespace="calico-system" Pod="csi-node-driver-n8llz" WorkloadEndpoint="172--239--193--244-k8s-csi--node--driver--n8llz-eth0" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.276 [INFO][4884] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" HandleID="k8s-pod-network.317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" Workload="172--239--193--244-k8s-csi--node--driver--n8llz-eth0" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.276 [INFO][4884] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" HandleID="k8s-pod-network.317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" Workload="172--239--193--244-k8s-csi--node--driver--n8llz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4f10), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-244", "pod":"csi-node-driver-n8llz", "timestamp":"2025-12-16 13:00:46.27670247 +0000 UTC"}, Hostname:"172-239-193-244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.276 [INFO][4884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.277 [INFO][4884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.277 [INFO][4884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-244' Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.283 [INFO][4884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" host="172-239-193-244" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.286 [INFO][4884] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-193-244" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.293 [INFO][4884] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.295 [INFO][4884] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.298 [INFO][4884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172-239-193-244" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.298 [INFO][4884] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" host="172-239-193-244" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.300 [INFO][4884] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6 Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.304 [INFO][4884] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" host="172-239-193-244" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.311 [INFO][4884] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.200/26] block=192.168.96.192/26 handle="k8s-pod-network.317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" host="172-239-193-244" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.311 [INFO][4884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.200/26] handle="k8s-pod-network.317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" host="172-239-193-244" Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.311 [INFO][4884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
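The ipam/ipam.go lines above trace Calico IPAM taking the host-wide lock, confirming the node's affinity for block 192.168.96.192/26, and claiming one /32 per pod from it (192.168.96.198, .199, and .200 in this section). A small sanity check of that block arithmetic, illustrative only and not Calico code, using the Python standard library:

    # Illustrative only: confirm the per-pod addresses claimed in the log all
    # fall inside the node's affine /26 block.
    import ipaddress

    block = ipaddress.ip_network("192.168.96.192/26")
    claimed = [ipaddress.ip_address(a) for a in
               ("192.168.96.198", "192.168.96.199", "192.168.96.200")]

    print(block.num_addresses)                 # 64 addresses, .192 through .255
    print(all(ip in block for ip in claimed))  # True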
Dec 16 13:00:46.335969 containerd[1618]: 2025-12-16 13:00:46.311 [INFO][4884] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.200/26] IPv6=[] ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" HandleID="k8s-pod-network.317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" Workload="172--239--193--244-k8s-csi--node--driver--n8llz-eth0" Dec 16 13:00:46.336812 containerd[1618]: 2025-12-16 13:00:46.314 [INFO][4876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" Namespace="calico-system" Pod="csi-node-driver-n8llz" WorkloadEndpoint="172--239--193--244-k8s-csi--node--driver--n8llz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-csi--node--driver--n8llz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c658f10-e923-42a5-b425-72ee5f2a64c8", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"", Pod:"csi-node-driver-n8llz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4fb5e1b653c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:46.336812 containerd[1618]: 2025-12-16 13:00:46.314 [INFO][4876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.200/32] ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" Namespace="calico-system" Pod="csi-node-driver-n8llz" WorkloadEndpoint="172--239--193--244-k8s-csi--node--driver--n8llz-eth0" Dec 16 13:00:46.336812 containerd[1618]: 2025-12-16 13:00:46.314 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fb5e1b653c ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" Namespace="calico-system" Pod="csi-node-driver-n8llz" WorkloadEndpoint="172--239--193--244-k8s-csi--node--driver--n8llz-eth0" Dec 16 13:00:46.336812 containerd[1618]: 2025-12-16 13:00:46.320 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" Namespace="calico-system" Pod="csi-node-driver-n8llz" WorkloadEndpoint="172--239--193--244-k8s-csi--node--driver--n8llz-eth0" Dec 16 13:00:46.336812 containerd[1618]: 2025-12-16 13:00:46.321 [INFO][4876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" 
Namespace="calico-system" Pod="csi-node-driver-n8llz" WorkloadEndpoint="172--239--193--244-k8s-csi--node--driver--n8llz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--244-k8s-csi--node--driver--n8llz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c658f10-e923-42a5-b425-72ee5f2a64c8", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-244", ContainerID:"317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6", Pod:"csi-node-driver-n8llz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4fb5e1b653c", MAC:"d2:e2:d2:e7:dc:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:00:46.336812 containerd[1618]: 2025-12-16 13:00:46.330 [INFO][4876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" Namespace="calico-system" Pod="csi-node-driver-n8llz" WorkloadEndpoint="172--239--193--244-k8s-csi--node--driver--n8llz-eth0" Dec 16 13:00:46.356446 containerd[1618]: time="2025-12-16T13:00:46.356152207Z" level=info msg="connecting to shim 317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6" address="unix:///run/containerd/s/c28dc56d05a01a58a8f1945e71d09bcdca5d15fabb85a6532dffb2bd95a6e6c1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:00:46.360490 kubelet[2813]: E1216 13:00:46.360193 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:00:46.382435 kubelet[2813]: E1216 13:00:46.381399 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:46.382435 kubelet[2813]: E1216 13:00:46.382386 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:46.384449 kubelet[2813]: E1216 13:00:46.384356 2813 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:46.403375 systemd[1]: Started cri-containerd-317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6.scope - libcontainer container 317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6. Dec 16 13:00:46.422690 kubelet[2813]: I1216 13:00:46.422349 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cwnsn" podStartSLOduration=37.422335654 podStartE2EDuration="37.422335654s" podCreationTimestamp="2025-12-16 13:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:00:46.410616106 +0000 UTC m=+44.357040589" watchObservedRunningTime="2025-12-16 13:00:46.422335654 +0000 UTC m=+44.368760137" Dec 16 13:00:46.431000 audit: BPF prog-id=232 op=LOAD Dec 16 13:00:46.435000 audit: BPF prog-id=233 op=LOAD Dec 16 13:00:46.435000 audit[4917]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4905 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376634336430323263353534613163313338383336346633616234 Dec 16 13:00:46.435000 audit: BPF prog-id=233 op=UNLOAD Dec 16 13:00:46.435000 audit[4917]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376634336430323263353534613163313338383336346633616234 Dec 16 13:00:46.435000 audit: BPF prog-id=234 op=LOAD Dec 16 13:00:46.435000 audit[4917]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4905 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376634336430323263353534613163313338383336346633616234 Dec 16 13:00:46.435000 audit: BPF prog-id=235 op=LOAD Dec 16 13:00:46.435000 audit[4917]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4905 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.435000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376634336430323263353534613163313338383336346633616234 Dec 16 13:00:46.435000 audit: BPF prog-id=235 op=UNLOAD Dec 16 13:00:46.435000 audit[4917]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376634336430323263353534613163313338383336346633616234 Dec 16 13:00:46.435000 audit: BPF prog-id=234 op=UNLOAD Dec 16 13:00:46.435000 audit[4917]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376634336430323263353534613163313338383336346633616234 Dec 16 13:00:46.435000 audit: BPF prog-id=236 op=LOAD Dec 16 13:00:46.435000 audit[4917]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4905 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376634336430323263353534613163313338383336346633616234 Dec 16 13:00:46.462225 containerd[1618]: time="2025-12-16T13:00:46.462190328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8llz,Uid:1c658f10-e923-42a5-b425-72ee5f2a64c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"317f43d022c554a1c1388364f3ab4e9bcde1e55b2feb81e4fa321a4d22ebe0a6\"" Dec 16 13:00:46.464196 containerd[1618]: time="2025-12-16T13:00:46.464166753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:00:46.506000 audit[4943]: NETFILTER_CFG table=filter:123 family=2 entries=18 op=nft_register_rule pid=4943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:46.506000 audit[4943]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb6944e50 a2=0 a3=7ffdb6944e3c items=0 ppid=2965 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:46.517000 audit[4943]: NETFILTER_CFG table=nat:124 family=2 entries=52 op=nft_register_chain 
pid=4943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:00:46.517000 audit[4943]: SYSCALL arch=c000003e syscall=46 success=yes exit=22668 a0=3 a1=7ffdb6944e50 a2=0 a3=7ffdb6944e3c items=0 ppid=2965 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:00:46.588397 containerd[1618]: time="2025-12-16T13:00:46.588327456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:46.589319 containerd[1618]: time="2025-12-16T13:00:46.589284498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:00:46.589460 containerd[1618]: time="2025-12-16T13:00:46.589366788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:46.589537 kubelet[2813]: E1216 13:00:46.589504 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:00:46.589586 kubelet[2813]: E1216 13:00:46.589544 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:00:46.589653 kubelet[2813]: E1216 13:00:46.589629 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-n8llz_calico-system(1c658f10-e923-42a5-b425-72ee5f2a64c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:46.590980 containerd[1618]: time="2025-12-16T13:00:46.590947952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:00:46.718458 systemd-networkd[1517]: calic7cfd7c152a: Gained IPv6LL Dec 16 13:00:46.724434 containerd[1618]: time="2025-12-16T13:00:46.724086856Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:46.725934 containerd[1618]: time="2025-12-16T13:00:46.725793270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:00:46.726053 containerd[1618]: time="2025-12-16T13:00:46.725879461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:46.726544 kubelet[2813]: E1216 13:00:46.726435 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:00:46.726544 kubelet[2813]: E1216 13:00:46.726501 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:00:46.726778 kubelet[2813]: E1216 13:00:46.726744 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-n8llz_calico-system(1c658f10-e923-42a5-b425-72ee5f2a64c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:46.727211 kubelet[2813]: E1216 13:00:46.727160 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:00:46.811000 audit: BPF prog-id=237 op=LOAD Dec 16 13:00:46.811000 audit[4962]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee7114920 a2=98 a3=1fffffffffffffff items=0 ppid=4945 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.811000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:00:46.812000 audit: BPF prog-id=237 op=UNLOAD Dec 16 13:00:46.812000 audit[4962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffee71148f0 a3=0 items=0 ppid=4945 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.812000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:00:46.812000 audit: BPF prog-id=238 op=LOAD Dec 16 13:00:46.812000 audit[4962]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee7114800 a2=94 a3=3 items=0 ppid=4945 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.812000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:00:46.812000 audit: BPF prog-id=238 op=UNLOAD Dec 16 13:00:46.812000 audit[4962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffee7114800 a2=94 a3=3 items=0 ppid=4945 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.812000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:00:46.812000 audit: BPF prog-id=239 op=LOAD Dec 16 13:00:46.812000 audit[4962]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee7114840 a2=94 a3=7ffee7114a20 items=0 ppid=4945 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.812000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:00:46.812000 audit: BPF prog-id=239 op=UNLOAD Dec 16 13:00:46.812000 audit[4962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffee7114840 a2=94 a3=7ffee7114a20 items=0 ppid=4945 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.812000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:00:46.813000 audit: BPF prog-id=240 op=LOAD Dec 16 13:00:46.813000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe5537b600 a2=98 a3=3 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.813000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:46.814000 audit: BPF prog-id=240 op=UNLOAD Dec 16 13:00:46.814000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe5537b5d0 a3=0 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.814000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:46.814000 audit: BPF prog-id=241 op=LOAD Dec 16 13:00:46.814000 audit[4963]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe5537b3f0 a2=94 a3=54428f items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.814000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:46.814000 audit: BPF prog-id=241 op=UNLOAD Dec 16 13:00:46.814000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe5537b3f0 a2=94 a3=54428f items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.814000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:46.814000 audit: BPF prog-id=242 op=LOAD Dec 16 13:00:46.814000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe5537b420 a2=94 a3=2 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.814000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:46.814000 audit: BPF prog-id=242 op=UNLOAD Dec 16 13:00:46.814000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe5537b420 a2=0 a3=2 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.814000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:46.988000 audit: BPF prog-id=243 op=LOAD Dec 16 13:00:46.988000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe5537b2e0 a2=94 a3=1 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.988000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:46.988000 audit: BPF prog-id=243 op=UNLOAD Dec 16 13:00:46.988000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe5537b2e0 a2=94 a3=1 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.988000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:46.999000 audit: BPF prog-id=244 op=LOAD Dec 16 13:00:46.999000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe5537b2d0 a2=94 a3=4 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:46.999000 audit: BPF prog-id=244 op=UNLOAD Dec 16 13:00:46.999000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe5537b2d0 a2=0 a3=4 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:46.999000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:47.000000 audit: BPF prog-id=245 op=LOAD Dec 16 13:00:47.000000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe5537b130 a2=94 a3=5 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:47.000000 audit: BPF prog-id=245 op=UNLOAD Dec 16 13:00:47.000000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe5537b130 a2=0 a3=5 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:47.000000 audit: BPF prog-id=246 op=LOAD Dec 16 13:00:47.000000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe5537b350 a2=94 a3=6 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:47.000000 audit: BPF prog-id=246 op=UNLOAD Dec 16 13:00:47.000000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe5537b350 a2=0 a3=6 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:47.001000 audit: BPF prog-id=247 op=LOAD Dec 16 13:00:47.001000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe5537ab00 a2=94 a3=88 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.001000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:47.001000 audit: BPF prog-id=248 op=LOAD Dec 16 13:00:47.001000 audit[4963]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe5537a980 a2=94 a3=2 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.001000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:47.001000 audit: BPF prog-id=248 op=UNLOAD Dec 16 13:00:47.001000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe5537a9b0 a2=0 a3=7ffe5537aab0 items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.001000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:47.002000 audit: BPF prog-id=247 op=UNLOAD Dec 16 13:00:47.002000 audit[4963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=18c7bd10 a2=0 a3=34bd063a4fba4dca items=0 ppid=4945 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.002000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:00:47.011000 audit: BPF prog-id=249 op=LOAD Dec 16 13:00:47.011000 audit[4966]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6fa3cbb0 a2=98 a3=1999999999999999 items=0 ppid=4945 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.011000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:00:47.011000 audit: BPF prog-id=249 op=UNLOAD Dec 16 13:00:47.011000 audit[4966]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc6fa3cb80 a3=0 items=0 ppid=4945 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.011000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:00:47.011000 audit: BPF prog-id=250 op=LOAD Dec 16 13:00:47.011000 audit[4966]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6fa3ca90 a2=94 a3=ffff items=0 ppid=4945 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.011000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:00:47.012000 audit: BPF prog-id=250 op=UNLOAD Dec 16 13:00:47.012000 audit[4966]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc6fa3ca90 a2=94 a3=ffff items=0 ppid=4945 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.012000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:00:47.012000 audit: BPF prog-id=251 op=LOAD Dec 16 13:00:47.012000 audit[4966]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6fa3cad0 a2=94 a3=7ffc6fa3ccb0 items=0 ppid=4945 pid=4966 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.012000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:00:47.012000 audit: BPF prog-id=251 op=UNLOAD Dec 16 13:00:47.012000 audit[4966]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc6fa3cad0 a2=94 a3=7ffc6fa3ccb0 items=0 ppid=4945 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.012000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:00:47.080727 systemd-networkd[1517]: vxlan.calico: Link UP Dec 16 13:00:47.080739 systemd-networkd[1517]: vxlan.calico: Gained carrier Dec 16 13:00:47.110000 audit: BPF prog-id=252 op=LOAD Dec 16 13:00:47.110000 audit[4993]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefd215ce0 a2=98 a3=0 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.110000 audit: BPF prog-id=252 op=UNLOAD Dec 16 13:00:47.110000 audit[4993]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffefd215cb0 a3=0 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.110000 audit: BPF prog-id=253 op=LOAD Dec 16 13:00:47.110000 audit[4993]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefd215af0 a2=94 a3=54428f items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.110000 audit: BPF prog-id=253 op=UNLOAD Dec 16 13:00:47.110000 audit[4993]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffefd215af0 a2=94 a3=54428f items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.110000 audit: BPF prog-id=254 op=LOAD Dec 16 13:00:47.110000 audit[4993]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefd215b20 a2=94 a3=2 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.110000 audit: BPF prog-id=254 op=UNLOAD Dec 16 13:00:47.110000 audit[4993]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffefd215b20 a2=0 a3=2 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.110000 audit: BPF prog-id=255 op=LOAD Dec 16 13:00:47.110000 audit[4993]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefd2158d0 a2=94 a3=4 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.110000 audit: BPF prog-id=255 op=UNLOAD Dec 16 13:00:47.110000 audit[4993]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffefd2158d0 a2=94 a3=4 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.110000 audit: BPF prog-id=256 op=LOAD Dec 16 13:00:47.110000 audit[4993]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefd2159d0 a2=94 a3=7ffefd215b50 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.110000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.110000 audit: BPF prog-id=256 op=UNLOAD Dec 16 13:00:47.110000 audit[4993]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffefd2159d0 a2=0 a3=7ffefd215b50 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.110000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.111000 audit: BPF prog-id=257 op=LOAD Dec 16 13:00:47.111000 audit[4993]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefd215100 a2=94 a3=2 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.111000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.111000 audit: BPF prog-id=257 op=UNLOAD Dec 16 13:00:47.111000 audit[4993]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffefd215100 a2=0 a3=2 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.111000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.111000 audit: BPF prog-id=258 op=LOAD Dec 16 13:00:47.111000 audit[4993]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefd215200 a2=94 a3=30 items=0 ppid=4945 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.111000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:00:47.126000 audit: BPF prog-id=259 op=LOAD Dec 16 13:00:47.126000 audit[4998]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda159fba0 a2=98 a3=0 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.126000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.126000 audit: BPF prog-id=259 op=UNLOAD Dec 16 13:00:47.126000 audit[4998]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=3 a1=8 a2=7ffda159fb70 a3=0 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.126000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.127000 audit: BPF prog-id=260 op=LOAD Dec 16 13:00:47.127000 audit[4998]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda159f990 a2=94 a3=54428f items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.127000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.127000 audit: BPF prog-id=260 op=UNLOAD Dec 16 13:00:47.127000 audit[4998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda159f990 a2=94 a3=54428f items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.127000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.127000 audit: BPF prog-id=261 op=LOAD Dec 16 13:00:47.127000 audit[4998]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda159f9c0 a2=94 a3=2 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.127000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.127000 audit: BPF prog-id=261 op=UNLOAD Dec 16 13:00:47.127000 audit[4998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda159f9c0 a2=0 a3=2 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.127000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.293874 systemd-networkd[1517]: calif55941d38cb: Gained IPv6LL Dec 16 13:00:47.300000 audit: BPF prog-id=262 op=LOAD Dec 16 13:00:47.300000 audit[4998]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda159f880 a2=94 a3=1 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.300000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.300000 audit: BPF prog-id=262 op=UNLOAD Dec 16 13:00:47.300000 audit[4998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda159f880 a2=94 a3=1 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.300000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.308000 audit: BPF prog-id=263 op=LOAD Dec 16 13:00:47.308000 audit[4998]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda159f870 a2=94 a3=4 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.308000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.308000 audit: BPF prog-id=263 op=UNLOAD Dec 16 13:00:47.308000 audit[4998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffda159f870 a2=0 a3=4 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.308000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.309000 audit: BPF prog-id=264 op=LOAD Dec 16 13:00:47.309000 audit[4998]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda159f6d0 a2=94 a3=5 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.309000 audit: BPF prog-id=264 op=UNLOAD Dec 16 13:00:47.309000 audit[4998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffda159f6d0 a2=0 a3=5 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.309000 audit: BPF prog-id=265 op=LOAD Dec 16 13:00:47.309000 audit[4998]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda159f8f0 a2=94 a3=6 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.309000 audit: BPF prog-id=265 op=UNLOAD Dec 16 13:00:47.309000 audit[4998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffda159f8f0 a2=0 a3=6 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.309000 audit: BPF prog-id=266 op=LOAD Dec 16 13:00:47.309000 audit[4998]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda159f0a0 a2=94 a3=88 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.309000 audit: BPF prog-id=267 op=LOAD Dec 16 13:00:47.309000 audit[4998]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffda159ef20 a2=94 a3=2 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.309000 audit: BPF prog-id=267 op=UNLOAD Dec 16 13:00:47.309000 audit[4998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffda159ef50 a2=0 a3=7ffda159f050 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.310000 audit: BPF prog-id=266 op=UNLOAD Dec 16 13:00:47.310000 audit[4998]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7a98d10 a2=0 a3=5d30abc1707036d6 items=0 ppid=4945 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.310000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:00:47.315000 audit: BPF prog-id=258 op=UNLOAD Dec 16 13:00:47.315000 audit[4945]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 
a0=ffffffffffffff9c a1=c000ed0040 a2=0 a3=0 items=0 ppid=4035 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.315000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 13:00:47.388282 kubelet[2813]: E1216 13:00:47.386658 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:47.390029 kubelet[2813]: E1216 13:00:47.389517 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:47.390302 kubelet[2813]: E1216 13:00:47.390279 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:00:47.390751 kubelet[2813]: E1216 13:00:47.390728 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:00:47.412307 kernel: kauditd_printk_skb: 398 callbacks suppressed Dec 16 13:00:47.412385 kernel: audit: type=1325 audit(1765890047.404:742): table=raw:125 family=2 entries=21 op=nft_register_chain pid=5026 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:00:47.404000 audit[5026]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=5026 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:00:47.404000 audit[5026]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd1231e650 a2=0 a3=7ffd1231e63c items=0 ppid=4945 pid=5026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.427907 kernel: audit: type=1300 audit(1765890047.404:742): arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd1231e650 a2=0 a3=7ffd1231e63c items=0 ppid=4945 pid=5026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.427962 kernel: audit: type=1327 audit(1765890047.404:742): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:00:47.404000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:00:47.436000 audit[5027]: NETFILTER_CFG table=mangle:126 family=2 entries=16 op=nft_register_chain pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:00:47.442055 kernel: audit: type=1325 audit(1765890047.436:743): table=mangle:126 family=2 entries=16 op=nft_register_chain pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:00:47.436000 audit[5027]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff3ce70970 a2=0 a3=7fff3ce7095c items=0 ppid=4945 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.455469 kernel: audit: type=1300 audit(1765890047.436:743): arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff3ce70970 a2=0 a3=7fff3ce7095c items=0 ppid=4945 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.455519 kernel: audit: type=1327 audit(1765890047.436:743): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:00:47.436000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:00:47.467000 audit[5030]: NETFILTER_CFG table=nat:127 family=2 entries=15 op=nft_register_chain pid=5030 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:00:47.482312 kernel: audit: type=1325 audit(1765890047.467:744): table=nat:127 family=2 entries=15 op=nft_register_chain pid=5030 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:00:47.482360 kernel: audit: type=1300 audit(1765890047.467:744): arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdaff8c4f0 a2=0 a3=7ffdaff8c4dc items=0 ppid=4945 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.467000 audit[5030]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdaff8c4f0 a2=0 a3=7ffdaff8c4dc items=0 ppid=4945 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.487663 kernel: audit: type=1327 audit(1765890047.467:744): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:00:47.467000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:00:47.492000 audit[5031]: NETFILTER_CFG table=filter:128 family=2 entries=327 op=nft_register_chain pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:00:47.543041 kernel: audit: type=1325 audit(1765890047.492:745): table=filter:128 family=2 entries=327 op=nft_register_chain pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:00:47.492000 audit[5031]: SYSCALL arch=c000003e syscall=46 success=yes exit=193468 a0=3 a1=7fff94f8b330 a2=0 a3=7fff94f8b31c items=0 ppid=4945 pid=5031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:00:47.492000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:00:47.997280 systemd-networkd[1517]: cali4fb5e1b653c: Gained IPv6LL Dec 16 13:00:48.381412 systemd-networkd[1517]: vxlan.calico: Gained IPv6LL Dec 16 13:00:48.387116 kubelet[2813]: E1216 13:00:48.386396 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:00:48.388471 kubelet[2813]: E1216 13:00:48.388414 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:00:51.158326 containerd[1618]: time="2025-12-16T13:00:51.158042739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:00:51.296514 containerd[1618]: time="2025-12-16T13:00:51.296438496Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:51.297527 containerd[1618]: time="2025-12-16T13:00:51.297484227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:00:51.297681 containerd[1618]: time="2025-12-16T13:00:51.297564257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:51.297837 kubelet[2813]: E1216 13:00:51.297685 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:00:51.297837 kubelet[2813]: E1216 13:00:51.297723 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:00:51.297837 kubelet[2813]: E1216 13:00:51.297808 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-555f98f96-jlzdx_calico-system(537eba7d-0fc3-4664-af16-ea0352a41fb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:51.300460 containerd[1618]: time="2025-12-16T13:00:51.300402912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:00:51.448622 containerd[1618]: time="2025-12-16T13:00:51.448489616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:51.449492 containerd[1618]: time="2025-12-16T13:00:51.449458877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:00:51.449552 containerd[1618]: time="2025-12-16T13:00:51.449521277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:51.449710 kubelet[2813]: E1216 13:00:51.449669 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:00:51.449773 kubelet[2813]: E1216 13:00:51.449714 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:00:51.449856 kubelet[2813]: E1216 13:00:51.449809 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-555f98f96-jlzdx_calico-system(537eba7d-0fc3-4664-af16-ea0352a41fb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:51.449942 kubelet[2813]: E1216 13:00:51.449912 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:00:57.159103 containerd[1618]: time="2025-12-16T13:00:57.158407439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:00:57.291453 containerd[1618]: time="2025-12-16T13:00:57.291399084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:57.292482 containerd[1618]: time="2025-12-16T13:00:57.292406985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:00:57.292559 containerd[1618]: time="2025-12-16T13:00:57.292483955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:57.292968 kubelet[2813]: E1216 13:00:57.292890 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:57.295674 kubelet[2813]: E1216 13:00:57.293418 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:57.295674 kubelet[2813]: E1216 13:00:57.293619 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-559c85bdcd-wnq6t_calico-apiserver(e3fb7e1f-3535-4337-a579-ff59dbec66d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:57.295674 kubelet[2813]: E1216 13:00:57.293656 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:00:57.296089 containerd[1618]: time="2025-12-16T13:00:57.295993789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:00:57.445950 containerd[1618]: time="2025-12-16T13:00:57.445820373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:57.446954 containerd[1618]: time="2025-12-16T13:00:57.446927494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:00:57.447001 containerd[1618]: time="2025-12-16T13:00:57.446972644Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:57.447121 kubelet[2813]: E1216 13:00:57.447095 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:00:57.447177 kubelet[2813]: E1216 13:00:57.447127 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:00:57.447216 kubelet[2813]: E1216 13:00:57.447185 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-n2rhc_calico-system(d1ac913f-bfd6-4d60-abaa-3d193db00d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:57.447244 kubelet[2813]: E1216 13:00:57.447214 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:00:58.160707 containerd[1618]: time="2025-12-16T13:00:58.159354189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:00:58.295843 containerd[1618]: time="2025-12-16T13:00:58.295760848Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:58.296970 containerd[1618]: time="2025-12-16T13:00:58.296859529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:00:58.296970 containerd[1618]: time="2025-12-16T13:00:58.296940109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:58.297190 kubelet[2813]: E1216 13:00:58.297133 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:00:58.297523 kubelet[2813]: E1216 13:00:58.297197 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:00:58.297523 kubelet[2813]: E1216 13:00:58.297295 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-579dd74c6f-9xbhn_calico-system(7a67396c-f7d6-4b07-9621-f2a99ef21577): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:58.297523 kubelet[2813]: E1216 13:00:58.297337 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:00:59.158287 containerd[1618]: time="2025-12-16T13:00:59.158224386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:00:59.288451 containerd[1618]: time="2025-12-16T13:00:59.288380999Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:00:59.289349 containerd[1618]: time="2025-12-16T13:00:59.289324470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:00:59.289412 containerd[1618]: time="2025-12-16T13:00:59.289390960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:00:59.289551 kubelet[2813]: E1216 13:00:59.289520 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:59.289600 kubelet[2813]: E1216 13:00:59.289563 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:00:59.289664 kubelet[2813]: E1216 13:00:59.289646 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-559c85bdcd-glvwq_calico-apiserver(3c1f37f8-b232-4ab7-9b50-17ad83754886): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:00:59.289718 kubelet[2813]: E1216 13:00:59.289680 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:01:03.157996 containerd[1618]: time="2025-12-16T13:01:03.157849224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 
13:01:03.295944 containerd[1618]: time="2025-12-16T13:01:03.295881253Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:01:03.296881 containerd[1618]: time="2025-12-16T13:01:03.296790444Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:01:03.296881 containerd[1618]: time="2025-12-16T13:01:03.296836724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:01:03.297115 kubelet[2813]: E1216 13:01:03.297052 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:01:03.297115 kubelet[2813]: E1216 13:01:03.297098 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:01:03.298093 kubelet[2813]: E1216 13:01:03.297185 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-n8llz_calico-system(1c658f10-e923-42a5-b425-72ee5f2a64c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:01:03.300266 containerd[1618]: time="2025-12-16T13:01:03.299594236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:01:03.434405 containerd[1618]: time="2025-12-16T13:01:03.434302042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:01:03.435215 containerd[1618]: time="2025-12-16T13:01:03.435185903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:01:03.435293 containerd[1618]: time="2025-12-16T13:01:03.435233423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:01:03.435369 kubelet[2813]: E1216 13:01:03.435332 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:01:03.435405 kubelet[2813]: E1216 13:01:03.435374 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:01:03.435456 kubelet[2813]: E1216 13:01:03.435437 2813 
kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-n8llz_calico-system(1c658f10-e923-42a5-b425-72ee5f2a64c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:01:03.435513 kubelet[2813]: E1216 13:01:03.435471 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:01:05.158335 kubelet[2813]: E1216 13:01:05.158272 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:01:07.381476 kubelet[2813]: E1216 13:01:07.381438 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:01:09.158707 kubelet[2813]: E1216 13:01:09.158413 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:01:11.159134 kubelet[2813]: E1216 13:01:11.159056 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 
13:01:11.159864 kubelet[2813]: E1216 13:01:11.159583 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:01:12.162223 kubelet[2813]: E1216 13:01:12.162126 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:01:13.158216 kubelet[2813]: E1216 13:01:13.158157 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:01:17.157811 containerd[1618]: time="2025-12-16T13:01:17.157772996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:01:17.306347 containerd[1618]: time="2025-12-16T13:01:17.306274083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:01:17.307310 containerd[1618]: time="2025-12-16T13:01:17.307264658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:01:17.307404 containerd[1618]: time="2025-12-16T13:01:17.307350045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:01:17.308511 kubelet[2813]: E1216 13:01:17.307565 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:01:17.308511 kubelet[2813]: E1216 13:01:17.307614 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:01:17.308511 kubelet[2813]: E1216 13:01:17.307688 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-555f98f96-jlzdx_calico-system(537eba7d-0fc3-4664-af16-ea0352a41fb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:01:17.311417 containerd[1618]: time="2025-12-16T13:01:17.310770142Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:01:17.443613 containerd[1618]: time="2025-12-16T13:01:17.442843008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:01:17.444548 containerd[1618]: time="2025-12-16T13:01:17.444329105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:01:17.444625 containerd[1618]: time="2025-12-16T13:01:17.444412062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:01:17.446193 kubelet[2813]: E1216 13:01:17.446138 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:01:17.446193 kubelet[2813]: E1216 13:01:17.446193 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:01:17.446659 kubelet[2813]: E1216 13:01:17.446630 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-555f98f96-jlzdx_calico-system(537eba7d-0fc3-4664-af16-ea0352a41fb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:01:17.446701 kubelet[2813]: E1216 13:01:17.446684 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:01:18.161338 kubelet[2813]: E1216 13:01:18.160783 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:01:20.159050 kubelet[2813]: E1216 13:01:20.158376 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:01:20.161382 containerd[1618]: time="2025-12-16T13:01:20.161090638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:01:20.292152 containerd[1618]: time="2025-12-16T13:01:20.292104431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:01:20.293029 containerd[1618]: time="2025-12-16T13:01:20.292970932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:01:20.293100 containerd[1618]: time="2025-12-16T13:01:20.293076849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:01:20.293316 kubelet[2813]: E1216 13:01:20.293277 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:01:20.293367 kubelet[2813]: E1216 13:01:20.293343 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:01:20.293639 kubelet[2813]: E1216 13:01:20.293456 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-n2rhc_calico-system(d1ac913f-bfd6-4d60-abaa-3d193db00d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:01:20.293639 kubelet[2813]: E1216 13:01:20.293605 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:01:22.159689 containerd[1618]: time="2025-12-16T13:01:22.159181509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:01:22.304003 containerd[1618]: time="2025-12-16T13:01:22.303830782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:01:22.304821 containerd[1618]: time="2025-12-16T13:01:22.304708374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
16 13:01:22.304821 containerd[1618]: time="2025-12-16T13:01:22.304797152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:01:22.305687 kubelet[2813]: E1216 13:01:22.305372 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:01:22.305687 kubelet[2813]: E1216 13:01:22.305647 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:01:22.306741 kubelet[2813]: E1216 13:01:22.306251 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-559c85bdcd-wnq6t_calico-apiserver(e3fb7e1f-3535-4337-a579-ff59dbec66d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:01:22.306741 kubelet[2813]: E1216 13:01:22.306519 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:01:24.162861 containerd[1618]: time="2025-12-16T13:01:24.162801497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:01:24.291205 containerd[1618]: time="2025-12-16T13:01:24.291155579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:01:24.292173 containerd[1618]: time="2025-12-16T13:01:24.292144630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:01:24.292248 containerd[1618]: time="2025-12-16T13:01:24.292213548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:01:24.292522 kubelet[2813]: E1216 13:01:24.292451 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:01:24.293024 kubelet[2813]: E1216 13:01:24.292529 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:01:24.293024 kubelet[2813]: 
E1216 13:01:24.292739 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-579dd74c6f-9xbhn_calico-system(7a67396c-f7d6-4b07-9621-f2a99ef21577): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:01:24.293024 kubelet[2813]: E1216 13:01:24.292773 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:01:24.294053 containerd[1618]: time="2025-12-16T13:01:24.293978485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:01:24.431081 containerd[1618]: time="2025-12-16T13:01:24.430931772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:01:24.432095 containerd[1618]: time="2025-12-16T13:01:24.432065999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:01:24.432217 containerd[1618]: time="2025-12-16T13:01:24.432147006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:01:24.432602 kubelet[2813]: E1216 13:01:24.432562 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:01:24.432673 kubelet[2813]: E1216 13:01:24.432612 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:01:24.433130 kubelet[2813]: E1216 13:01:24.432767 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-559c85bdcd-glvwq_calico-apiserver(3c1f37f8-b232-4ab7-9b50-17ad83754886): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:01:24.433130 kubelet[2813]: E1216 13:01:24.432823 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:01:28.157957 
kubelet[2813]: E1216 13:01:28.157604 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:01:29.157436 kubelet[2813]: E1216 13:01:29.157379 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:01:31.161287 containerd[1618]: time="2025-12-16T13:01:31.158951725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:01:31.395052 containerd[1618]: time="2025-12-16T13:01:31.394887373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:01:31.396677 containerd[1618]: time="2025-12-16T13:01:31.396457234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:01:31.396677 containerd[1618]: time="2025-12-16T13:01:31.396536432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:01:31.396750 kubelet[2813]: E1216 13:01:31.396694 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:01:31.396750 kubelet[2813]: E1216 13:01:31.396737 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:01:31.397144 kubelet[2813]: E1216 13:01:31.396821 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-n8llz_calico-system(1c658f10-e923-42a5-b425-72ee5f2a64c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:01:31.398208 containerd[1618]: time="2025-12-16T13:01:31.398176681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:01:31.537190 containerd[1618]: time="2025-12-16T13:01:31.536908082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:01:31.538351 containerd[1618]: time="2025-12-16T13:01:31.538286388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:01:31.538846 containerd[1618]: time="2025-12-16T13:01:31.538339887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:01:31.538945 kubelet[2813]: E1216 13:01:31.538893 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:01:31.538991 kubelet[2813]: E1216 13:01:31.538950 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:01:31.539293 kubelet[2813]: E1216 13:01:31.539256 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-n8llz_calico-system(1c658f10-e923-42a5-b425-72ee5f2a64c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:01:31.539458 kubelet[2813]: E1216 13:01:31.539303 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:01:33.160372 kubelet[2813]: E1216 13:01:33.160249 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:01:35.160054 kubelet[2813]: E1216 13:01:35.159159 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:01:35.160696 kubelet[2813]: E1216 13:01:35.160137 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:01:37.157891 kubelet[2813]: E1216 13:01:37.157847 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:01:39.161104 kubelet[2813]: E1216 13:01:39.158525 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:01:43.159634 kubelet[2813]: E1216 13:01:43.158751 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:01:45.159125 kubelet[2813]: E1216 13:01:45.158765 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:01:48.158407 kubelet[2813]: E1216 
13:01:48.158370 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:01:50.159346 kubelet[2813]: E1216 13:01:50.158467 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:01:52.161464 kubelet[2813]: E1216 13:01:52.161324 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:01:53.159084 kubelet[2813]: E1216 13:01:53.158143 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:01:56.159657 kubelet[2813]: E1216 13:01:56.159607 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:01:56.160778 kubelet[2813]: E1216 13:01:56.159951 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:01:59.156798 kubelet[2813]: E1216 13:01:59.156748 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:02:01.158403 kubelet[2813]: E1216 13:02:01.158361 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:02:03.159358 containerd[1618]: time="2025-12-16T13:02:03.159234278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:02:03.312833 containerd[1618]: time="2025-12-16T13:02:03.312782070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:02:03.313931 containerd[1618]: time="2025-12-16T13:02:03.313897367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:02:03.313990 containerd[1618]: time="2025-12-16T13:02:03.313981246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:02:03.314198 kubelet[2813]: E1216 13:02:03.314163 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:02:03.314599 kubelet[2813]: E1216 13:02:03.314208 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:02:03.314599 kubelet[2813]: E1216 13:02:03.314371 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-n2rhc_calico-system(d1ac913f-bfd6-4d60-abaa-3d193db00d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:02:03.314599 kubelet[2813]: E1216 
13:02:03.314403 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:02:03.315771 containerd[1618]: time="2025-12-16T13:02:03.315749684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:02:03.476152 containerd[1618]: time="2025-12-16T13:02:03.476025575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:02:03.476965 containerd[1618]: time="2025-12-16T13:02:03.476937344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:02:03.477046 containerd[1618]: time="2025-12-16T13:02:03.477028353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:02:03.477243 kubelet[2813]: E1216 13:02:03.477207 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:02:03.477288 kubelet[2813]: E1216 13:02:03.477252 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:02:03.477338 kubelet[2813]: E1216 13:02:03.477314 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-559c85bdcd-wnq6t_calico-apiserver(e3fb7e1f-3535-4337-a579-ff59dbec66d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:02:03.477367 kubelet[2813]: E1216 13:02:03.477346 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:02:07.156697 kubelet[2813]: E1216 13:02:07.156656 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:02:08.164057 containerd[1618]: time="2025-12-16T13:02:08.163842177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:02:08.306156 containerd[1618]: time="2025-12-16T13:02:08.306098915Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:02:08.307341 
containerd[1618]: time="2025-12-16T13:02:08.307291732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:02:08.307409 containerd[1618]: time="2025-12-16T13:02:08.307376271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:02:08.307565 kubelet[2813]: E1216 13:02:08.307527 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:02:08.307923 kubelet[2813]: E1216 13:02:08.307571 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:02:08.307923 kubelet[2813]: E1216 13:02:08.307634 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-579dd74c6f-9xbhn_calico-system(7a67396c-f7d6-4b07-9621-f2a99ef21577): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:02:08.307923 kubelet[2813]: E1216 13:02:08.307663 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:02:10.159130 kubelet[2813]: E1216 13:02:10.159086 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:02:11.159611 containerd[1618]: time="2025-12-16T13:02:11.159203063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:02:11.301779 containerd[1618]: time="2025-12-16T13:02:11.301616308Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Dec 16 13:02:11.302849 containerd[1618]: time="2025-12-16T13:02:11.302694567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:02:11.302849 containerd[1618]: time="2025-12-16T13:02:11.302789246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:02:11.303055 kubelet[2813]: E1216 13:02:11.302986 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:02:11.303055 kubelet[2813]: E1216 13:02:11.303052 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:02:11.304139 kubelet[2813]: E1216 13:02:11.303124 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-555f98f96-jlzdx_calico-system(537eba7d-0fc3-4664-af16-ea0352a41fb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:02:11.304902 containerd[1618]: time="2025-12-16T13:02:11.304427909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:02:11.470345 containerd[1618]: time="2025-12-16T13:02:11.469774616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:02:11.471393 containerd[1618]: time="2025-12-16T13:02:11.471352749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:02:11.471458 containerd[1618]: time="2025-12-16T13:02:11.471448108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:02:11.471738 kubelet[2813]: E1216 13:02:11.471696 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:02:11.471799 kubelet[2813]: E1216 13:02:11.471777 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:02:11.472148 kubelet[2813]: E1216 13:02:11.472125 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-555f98f96-jlzdx_calico-system(537eba7d-0fc3-4664-af16-ea0352a41fb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:02:11.472497 kubelet[2813]: E1216 13:02:11.472212 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:02:12.158717 containerd[1618]: time="2025-12-16T13:02:12.158129978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:02:12.304970 containerd[1618]: time="2025-12-16T13:02:12.304904644Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:02:12.323986 containerd[1618]: time="2025-12-16T13:02:12.323913689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:02:12.324111 containerd[1618]: time="2025-12-16T13:02:12.324088038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:02:12.324330 kubelet[2813]: E1216 13:02:12.324276 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:02:12.324330 kubelet[2813]: E1216 13:02:12.324323 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:02:12.325452 kubelet[2813]: E1216 13:02:12.324395 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-559c85bdcd-glvwq_calico-apiserver(3c1f37f8-b232-4ab7-9b50-17ad83754886): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:02:12.325452 kubelet[2813]: E1216 13:02:12.324424 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:02:16.159883 kubelet[2813]: E1216 13:02:16.159673 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:02:17.157371 kubelet[2813]: E1216 13:02:17.157325 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:02:18.162056 kubelet[2813]: E1216 13:02:18.161998 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:02:19.159660 kubelet[2813]: E1216 13:02:19.159411 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:02:22.160342 kubelet[2813]: E1216 13:02:22.160247 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:02:22.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.239.193.244:22-147.75.109.163:38474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:22.289957 systemd[1]: Started sshd@7-172.239.193.244:22-147.75.109.163:38474.service - OpenSSH per-connection server daemon (147.75.109.163:38474). Dec 16 13:02:22.292201 kernel: kauditd_printk_skb: 2 callbacks suppressed Dec 16 13:02:22.292274 kernel: audit: type=1130 audit(1765890142.289:746): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.239.193.244:22-147.75.109.163:38474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:02:22.669000 audit[5178]: USER_ACCT pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:22.672373 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:22.677140 sshd[5178]: Accepted publickey for core from 147.75.109.163 port 38474 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:22.669000 audit[5178]: CRED_ACQ pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:22.680569 kernel: audit: type=1101 audit(1765890142.669:747): pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:22.680619 kernel: audit: type=1103 audit(1765890142.669:748): pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:22.669000 audit[5178]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf615ef50 a2=3 a3=0 items=0 ppid=1 pid=5178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:22.692097 kernel: audit: type=1006 audit(1765890142.669:749): pid=5178 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 16 13:02:22.692151 kernel: audit: type=1300 audit(1765890142.669:749): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf615ef50 a2=3 a3=0 items=0 ppid=1 pid=5178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:22.690742 systemd-logind[1599]: New session 8 of user core. Dec 16 13:02:22.669000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:22.698388 kernel: audit: type=1327 audit(1765890142.669:749): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:22.703158 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 13:02:22.707000 audit[5178]: USER_START pid=5178 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:22.717038 kernel: audit: type=1105 audit(1765890142.707:750): pid=5178 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:22.711000 audit[5181]: CRED_ACQ pid=5181 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:22.725367 kernel: audit: type=1103 audit(1765890142.711:751): pid=5181 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:23.003159 sshd[5181]: Connection closed by 147.75.109.163 port 38474 Dec 16 13:02:23.004239 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:23.006000 audit[5178]: USER_END pid=5178 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:23.019050 kernel: audit: type=1106 audit(1765890143.006:752): pid=5178 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:23.020421 systemd[1]: sshd@7-172.239.193.244:22-147.75.109.163:38474.service: Deactivated successfully. Dec 16 13:02:23.023870 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:02:23.006000 audit[5178]: CRED_DISP pid=5178 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:23.029664 systemd-logind[1599]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:02:23.031841 systemd-logind[1599]: Removed session 8. Dec 16 13:02:23.034144 kernel: audit: type=1104 audit(1765890143.006:753): pid=5178 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:23.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.239.193.244:22-147.75.109.163:38474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:02:23.159878 kubelet[2813]: E1216 13:02:23.159831 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:02:24.157247 kubelet[2813]: E1216 13:02:24.156852 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:02:25.159616 containerd[1618]: time="2025-12-16T13:02:25.159551726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:02:25.309640 containerd[1618]: time="2025-12-16T13:02:25.309588893Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:02:25.310618 containerd[1618]: time="2025-12-16T13:02:25.310483125Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:02:25.310618 containerd[1618]: time="2025-12-16T13:02:25.310568684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:02:25.310796 kubelet[2813]: E1216 13:02:25.310695 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:02:25.310796 kubelet[2813]: E1216 13:02:25.310736 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:02:25.313095 kubelet[2813]: E1216 13:02:25.310813 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-n8llz_calico-system(1c658f10-e923-42a5-b425-72ee5f2a64c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:02:25.313218 containerd[1618]: time="2025-12-16T13:02:25.313184672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:02:25.448553 containerd[1618]: time="2025-12-16T13:02:25.448418014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:02:25.449236 containerd[1618]: time="2025-12-16T13:02:25.449191437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:02:25.449281 containerd[1618]: 
time="2025-12-16T13:02:25.449262937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:02:25.449661 kubelet[2813]: E1216 13:02:25.449504 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:02:25.450095 kubelet[2813]: E1216 13:02:25.449785 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:02:25.450095 kubelet[2813]: E1216 13:02:25.449867 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-n8llz_calico-system(1c658f10-e923-42a5-b425-72ee5f2a64c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:02:25.450095 kubelet[2813]: E1216 13:02:25.449902 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:02:26.160277 kubelet[2813]: E1216 13:02:26.159258 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:02:28.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.239.193.244:22-147.75.109.163:38484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:02:28.080181 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:02:28.080218 kernel: audit: type=1130 audit(1765890148.078:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.239.193.244:22-147.75.109.163:38484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:28.079249 systemd[1]: Started sshd@8-172.239.193.244:22-147.75.109.163:38484.service - OpenSSH per-connection server daemon (147.75.109.163:38484). Dec 16 13:02:28.434000 audit[5216]: USER_ACCT pid=5216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.438380 sshd-session[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:28.444629 sshd[5216]: Accepted publickey for core from 147.75.109.163 port 38484 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:28.445300 kernel: audit: type=1101 audit(1765890148.434:756): pid=5216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.436000 audit[5216]: CRED_ACQ pid=5216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.454334 kernel: audit: type=1103 audit(1765890148.436:757): pid=5216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.467720 kernel: audit: type=1006 audit(1765890148.436:758): pid=5216 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 13:02:28.467791 kernel: audit: type=1300 audit(1765890148.436:758): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9f466610 a2=3 a3=0 items=0 ppid=1 pid=5216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:28.436000 audit[5216]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9f466610 a2=3 a3=0 items=0 ppid=1 pid=5216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:28.462704 systemd-logind[1599]: New session 9 of user core. Dec 16 13:02:28.436000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:28.475468 kernel: audit: type=1327 audit(1765890148.436:758): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:28.476724 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 13:02:28.482000 audit[5216]: USER_START pid=5216 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.493034 kernel: audit: type=1105 audit(1765890148.482:759): pid=5216 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.491000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.504029 kernel: audit: type=1103 audit(1765890148.491:760): pid=5219 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.719516 sshd[5219]: Connection closed by 147.75.109.163 port 38484 Dec 16 13:02:28.723235 sshd-session[5216]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:28.724000 audit[5216]: USER_END pid=5216 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.735040 kernel: audit: type=1106 audit(1765890148.724:761): pid=5216 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.737424 systemd[1]: sshd@8-172.239.193.244:22-147.75.109.163:38484.service: Deactivated successfully. Dec 16 13:02:28.739724 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:02:28.743278 systemd-logind[1599]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:02:28.724000 audit[5216]: CRED_DISP pid=5216 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.751035 kernel: audit: type=1104 audit(1765890148.724:762): pid=5216 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:28.753078 systemd-logind[1599]: Removed session 9. Dec 16 13:02:28.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.239.193.244:22-147.75.109.163:38484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:02:30.161038 kubelet[2813]: E1216 13:02:30.160526 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:02:30.161450 kubelet[2813]: E1216 13:02:30.161113 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:02:32.159077 kubelet[2813]: E1216 13:02:32.158918 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:02:32.160070 kubelet[2813]: E1216 13:02:32.159972 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:02:33.158249 kubelet[2813]: E1216 13:02:33.158201 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:02:33.803740 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:02:33.803828 kernel: audit: type=1130 audit(1765890153.794:764): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.239.193.244:22-147.75.109.163:34618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:33.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.239.193.244:22-147.75.109.163:34618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:33.795150 systemd[1]: Started sshd@9-172.239.193.244:22-147.75.109.163:34618.service - OpenSSH per-connection server daemon (147.75.109.163:34618). 
Dec 16 13:02:34.139000 audit[5232]: USER_ACCT pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.142493 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:34.145448 sshd[5232]: Accepted publickey for core from 147.75.109.163 port 34618 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:34.148154 kernel: audit: type=1101 audit(1765890154.139:765): pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.140000 audit[5232]: CRED_ACQ pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.167614 systemd-logind[1599]: New session 10 of user core. Dec 16 13:02:34.177511 kernel: audit: type=1103 audit(1765890154.140:766): pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.177682 kernel: audit: type=1006 audit(1765890154.140:767): pid=5232 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 13:02:34.178212 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 13:02:34.140000 audit[5232]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff219b9f90 a2=3 a3=0 items=0 ppid=1 pid=5232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:34.191043 kernel: audit: type=1300 audit(1765890154.140:767): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff219b9f90 a2=3 a3=0 items=0 ppid=1 pid=5232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:34.140000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:34.184000 audit[5232]: USER_START pid=5232 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.198314 kernel: audit: type=1327 audit(1765890154.140:767): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:34.198366 kernel: audit: type=1105 audit(1765890154.184:768): pid=5232 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.192000 audit[5235]: CRED_ACQ pid=5235 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.212030 kernel: audit: type=1103 audit(1765890154.192:769): pid=5235 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.389250 sshd[5235]: Connection closed by 147.75.109.163 port 34618 Dec 16 13:02:34.390354 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:34.390000 audit[5232]: USER_END pid=5232 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.401048 kernel: audit: type=1106 audit(1765890154.390:770): pid=5232 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.395760 systemd-logind[1599]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:02:34.397760 systemd[1]: sshd@9-172.239.193.244:22-147.75.109.163:34618.service: Deactivated successfully. Dec 16 13:02:34.400897 systemd[1]: session-10.scope: Deactivated successfully. 
Dec 16 13:02:34.391000 audit[5232]: CRED_DISP pid=5232 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.404478 systemd-logind[1599]: Removed session 10. Dec 16 13:02:34.408057 kernel: audit: type=1104 audit(1765890154.391:771): pid=5232 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.239.193.244:22-147.75.109.163:34618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:34.457941 systemd[1]: Started sshd@10-172.239.193.244:22-147.75.109.163:34628.service - OpenSSH per-connection server daemon (147.75.109.163:34628). Dec 16 13:02:34.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.239.193.244:22-147.75.109.163:34628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:34.790000 audit[5249]: USER_ACCT pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.793998 sshd[5249]: Accepted publickey for core from 147.75.109.163 port 34628 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:34.795000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.795000 audit[5249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc12011c60 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:34.795000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:34.797277 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:34.806586 systemd-logind[1599]: New session 11 of user core. Dec 16 13:02:34.810146 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 13:02:34.813000 audit[5249]: USER_START pid=5249 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:34.817000 audit[5255]: CRED_ACQ pid=5255 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:35.089834 sshd[5255]: Connection closed by 147.75.109.163 port 34628 Dec 16 13:02:35.091571 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:35.092000 audit[5249]: USER_END pid=5249 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:35.093000 audit[5249]: CRED_DISP pid=5249 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:35.097693 systemd-logind[1599]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:02:35.098698 systemd[1]: sshd@10-172.239.193.244:22-147.75.109.163:34628.service: Deactivated successfully. Dec 16 13:02:35.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.239.193.244:22-147.75.109.163:34628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:35.101735 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:02:35.105770 systemd-logind[1599]: Removed session 11. Dec 16 13:02:35.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.239.193.244:22-147.75.109.163:34640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:35.163520 systemd[1]: Started sshd@11-172.239.193.244:22-147.75.109.163:34640.service - OpenSSH per-connection server daemon (147.75.109.163:34640). 
Dec 16 13:02:35.521000 audit[5265]: USER_ACCT pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:35.523118 sshd[5265]: Accepted publickey for core from 147.75.109.163 port 34640 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:35.523000 audit[5265]: CRED_ACQ pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:35.523000 audit[5265]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec7c4d9d0 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:35.523000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:35.525197 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:35.534224 systemd-logind[1599]: New session 12 of user core. Dec 16 13:02:35.537266 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:02:35.541000 audit[5265]: USER_START pid=5265 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:35.544000 audit[5268]: CRED_ACQ pid=5268 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:35.816571 sshd[5268]: Connection closed by 147.75.109.163 port 34640 Dec 16 13:02:35.819201 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:35.819000 audit[5265]: USER_END pid=5265 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:35.819000 audit[5265]: CRED_DISP pid=5265 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:35.824290 systemd-logind[1599]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:02:35.825594 systemd[1]: sshd@11-172.239.193.244:22-147.75.109.163:34640.service: Deactivated successfully. Dec 16 13:02:35.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.239.193.244:22-147.75.109.163:34640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:35.828233 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 16 13:02:35.830257 systemd-logind[1599]: Removed session 12. Dec 16 13:02:38.159663 kubelet[2813]: E1216 13:02:38.159619 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:02:40.160897 kubelet[2813]: E1216 13:02:40.160840 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:02:40.162486 kubelet[2813]: E1216 13:02:40.162239 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:02:40.894254 systemd[1]: Started sshd@12-172.239.193.244:22-147.75.109.163:34654.service - OpenSSH per-connection server daemon (147.75.109.163:34654). Dec 16 13:02:40.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.239.193.244:22-147.75.109.163:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:40.895295 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 13:02:40.895349 kernel: audit: type=1130 audit(1765890160.893:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.239.193.244:22-147.75.109.163:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:02:41.226000 audit[5305]: USER_ACCT pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.235062 kernel: audit: type=1101 audit(1765890161.226:792): pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.229654 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:41.235532 sshd[5305]: Accepted publickey for core from 147.75.109.163 port 34654 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:41.228000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.248039 kernel: audit: type=1103 audit(1765890161.228:793): pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.244291 systemd-logind[1599]: New session 13 of user core. Dec 16 13:02:41.262662 kernel: audit: type=1006 audit(1765890161.228:794): pid=5305 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 13:02:41.262714 kernel: audit: type=1300 audit(1765890161.228:794): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddb55bd30 a2=3 a3=0 items=0 ppid=1 pid=5305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:41.228000 audit[5305]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddb55bd30 a2=3 a3=0 items=0 ppid=1 pid=5305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:41.255158 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 13:02:41.266221 kernel: audit: type=1327 audit(1765890161.228:794): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:41.228000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:41.274901 kernel: audit: type=1105 audit(1765890161.265:795): pid=5305 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.265000 audit[5305]: USER_START pid=5305 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.274000 audit[5308]: CRED_ACQ pid=5308 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.282062 kernel: audit: type=1103 audit(1765890161.274:796): pid=5308 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.471513 sshd[5308]: Connection closed by 147.75.109.163 port 34654 Dec 16 13:02:41.473284 sshd-session[5305]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:41.474000 audit[5305]: USER_END pid=5305 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.477477 systemd-logind[1599]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:02:41.479885 systemd[1]: sshd@12-172.239.193.244:22-147.75.109.163:34654.service: Deactivated successfully. Dec 16 13:02:41.482193 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:02:41.484157 kernel: audit: type=1106 audit(1765890161.474:797): pid=5305 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.485127 systemd-logind[1599]: Removed session 13. Dec 16 13:02:41.474000 audit[5305]: CRED_DISP pid=5305 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.239.193.244:22-147.75.109.163:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:02:41.497036 kernel: audit: type=1104 audit(1765890161.474:798): pid=5305 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.537315 systemd[1]: Started sshd@13-172.239.193.244:22-147.75.109.163:34656.service - OpenSSH per-connection server daemon (147.75.109.163:34656). Dec 16 13:02:41.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.239.193.244:22-147.75.109.163:34656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:41.861000 audit[5320]: USER_ACCT pid=5320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.864204 sshd[5320]: Accepted publickey for core from 147.75.109.163 port 34656 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:41.863000 audit[5320]: CRED_ACQ pid=5320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.863000 audit[5320]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcaceef000 a2=3 a3=0 items=0 ppid=1 pid=5320 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:41.863000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:41.865174 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:41.873842 systemd-logind[1599]: New session 14 of user core. Dec 16 13:02:41.880357 systemd[1]: Started session-14.scope - Session 14 of User core. 
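The kernel "audit: type=NNNN" lines above pair each numeric record type with the named audit event printed next to it (SERVICE_START, USER_ACCT, CRED_ACQ, and so on). As a reading aid, the mapping for the record types that occur in this log can be written out directly; the snippet below is a minimal sketch using the standard Linux audit type numbers and is not output produced by the system itself.

    # Standard Linux audit record types seen in this log (numeric -> symbolic name).
    # Editorial reading aid only; not emitted by auditd/kauditd.
    AUDIT_TYPES = {
        1006: "LOGIN",          # auid/ses assignment for a new login
        1101: "USER_ACCT",      # PAM account validation
        1103: "CRED_ACQ",       # PAM credential acquisition
        1104: "CRED_DISP",      # PAM credential disposal
        1105: "USER_START",     # PAM session open
        1106: "USER_END",       # PAM session close
        1130: "SERVICE_START",  # systemd unit started
        1131: "SERVICE_STOP",   # systemd unit stopped
        1300: "SYSCALL",        # syscall record attached to an event
        1325: "NETFILTER_CFG",  # netfilter/nftables table change
        1327: "PROCTITLE",      # hex-encoded process command line
    }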
Dec 16 13:02:41.884000 audit[5320]: USER_START pid=5320 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:41.887000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:42.498681 sshd[5323]: Connection closed by 147.75.109.163 port 34656 Dec 16 13:02:42.499570 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:42.500000 audit[5320]: USER_END pid=5320 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:42.500000 audit[5320]: CRED_DISP pid=5320 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:42.504748 systemd-logind[1599]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:02:42.505346 systemd[1]: sshd@13-172.239.193.244:22-147.75.109.163:34656.service: Deactivated successfully. Dec 16 13:02:42.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.239.193.244:22-147.75.109.163:34656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:42.508830 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:02:42.510953 systemd-logind[1599]: Removed session 14. Dec 16 13:02:42.578822 systemd[1]: Started sshd@14-172.239.193.244:22-147.75.109.163:55614.service - OpenSSH per-connection server daemon (147.75.109.163:55614). Dec 16 13:02:42.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.239.193.244:22-147.75.109.163:55614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:02:42.938000 audit[5333]: USER_ACCT pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:42.939991 sshd[5333]: Accepted publickey for core from 147.75.109.163 port 55614 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:42.940000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:42.940000 audit[5333]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb976b270 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:42.940000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:42.942431 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:42.947972 systemd-logind[1599]: New session 15 of user core. Dec 16 13:02:42.954141 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:02:42.958000 audit[5333]: USER_START pid=5333 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:42.964000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:43.710000 audit[5346]: NETFILTER_CFG table=filter:129 family=2 entries=26 op=nft_register_rule pid=5346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:02:43.710000 audit[5346]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe6bf0b510 a2=0 a3=7ffe6bf0b4fc items=0 ppid=2965 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:43.710000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:02:43.716000 audit[5346]: NETFILTER_CFG table=nat:130 family=2 entries=20 op=nft_register_rule pid=5346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:02:43.716000 audit[5346]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe6bf0b510 a2=0 a3=0 items=0 ppid=2965 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:43.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:02:43.767557 sshd[5336]: Connection closed by 147.75.109.163 port 55614 
Dec 16 13:02:43.770049 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:43.771000 audit[5333]: USER_END pid=5333 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:43.771000 audit[5333]: CRED_DISP pid=5333 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:43.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.239.193.244:22-147.75.109.163:55614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:43.775257 systemd-logind[1599]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:02:43.775353 systemd[1]: sshd@14-172.239.193.244:22-147.75.109.163:55614.service: Deactivated successfully. Dec 16 13:02:43.778366 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:02:43.782180 systemd-logind[1599]: Removed session 15. Dec 16 13:02:43.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.239.193.244:22-147.75.109.163:55628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:43.836837 systemd[1]: Started sshd@15-172.239.193.244:22-147.75.109.163:55628.service - OpenSSH per-connection server daemon (147.75.109.163:55628). Dec 16 13:02:44.191000 audit[5351]: USER_ACCT pid=5351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:44.193698 sshd[5351]: Accepted publickey for core from 147.75.109.163 port 55628 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:44.193000 audit[5351]: CRED_ACQ pid=5351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:44.193000 audit[5351]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb11f9500 a2=3 a3=0 items=0 ppid=1 pid=5351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:44.193000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:44.195873 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:44.204736 systemd-logind[1599]: New session 16 of user core. Dec 16 13:02:44.209207 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 13:02:44.214000 audit[5351]: USER_START pid=5351 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:44.216000 audit[5354]: CRED_ACQ pid=5354 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:44.575933 sshd[5354]: Connection closed by 147.75.109.163 port 55628 Dec 16 13:02:44.578072 sshd-session[5351]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:44.581000 audit[5351]: USER_END pid=5351 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:44.581000 audit[5351]: CRED_DISP pid=5351 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:44.588083 systemd[1]: sshd@15-172.239.193.244:22-147.75.109.163:55628.service: Deactivated successfully. Dec 16 13:02:44.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.239.193.244:22-147.75.109.163:55628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:44.592388 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:02:44.594751 systemd-logind[1599]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:02:44.597942 systemd-logind[1599]: Removed session 16. Dec 16 13:02:44.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.239.193.244:22-147.75.109.163:55638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:44.650436 systemd[1]: Started sshd@16-172.239.193.244:22-147.75.109.163:55638.service - OpenSSH per-connection server daemon (147.75.109.163:55638). 
Dec 16 13:02:44.737000 audit[5368]: NETFILTER_CFG table=filter:131 family=2 entries=38 op=nft_register_rule pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:02:44.737000 audit[5368]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe7b6db740 a2=0 a3=7ffe7b6db72c items=0 ppid=2965 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:44.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:02:44.746000 audit[5368]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:02:44.746000 audit[5368]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe7b6db740 a2=0 a3=0 items=0 ppid=2965 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:44.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:02:44.990000 audit[5364]: USER_ACCT pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:44.995326 sshd[5364]: Accepted publickey for core from 147.75.109.163 port 55638 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:44.997000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:44.997000 audit[5364]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff646dfa80 a2=3 a3=0 items=0 ppid=1 pid=5364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:44.997000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:44.998630 sshd-session[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:45.009217 systemd-logind[1599]: New session 17 of user core. Dec 16 13:02:45.015687 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 13:02:45.018000 audit[5364]: USER_START pid=5364 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:45.023000 audit[5369]: CRED_ACQ pid=5369 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:45.231221 sshd[5369]: Connection closed by 147.75.109.163 port 55638 Dec 16 13:02:45.232351 sshd-session[5364]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:45.234000 audit[5364]: USER_END pid=5364 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:45.234000 audit[5364]: CRED_DISP pid=5364 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:45.241276 systemd[1]: sshd@16-172.239.193.244:22-147.75.109.163:55638.service: Deactivated successfully. Dec 16 13:02:45.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.239.193.244:22-147.75.109.163:55638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:45.245760 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:02:45.249405 systemd-logind[1599]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:02:45.250837 systemd-logind[1599]: Removed session 17. 
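Each PROCTITLE record above carries the process command line as a hex string with NUL-separated arguments. A short sketch of how such a value can be decoded, using only the Python standard library, applied to the two proctitle values that recur in this log:

    # Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
    def decode_proctitle(hex_value: str) -> list[str]:
        return bytes.fromhex(hex_value).decode("utf-8", "replace").split("\x00")

    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> ['sshd-session: core [priv]']
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> ['iptables-restore', '-w', '5', '--noflush', '--counters']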
Dec 16 13:02:46.162416 kubelet[2813]: E1216 13:02:46.162345 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-n2rhc" podUID="d1ac913f-bfd6-4d60-abaa-3d193db00d41" Dec 16 13:02:46.165764 kubelet[2813]: E1216 13:02:46.165730 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-wnq6t" podUID="e3fb7e1f-3535-4337-a579-ff59dbec66d0" Dec 16 13:02:47.158484 kubelet[2813]: E1216 13:02:47.158445 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577" Dec 16 13:02:48.218647 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 13:02:48.218790 kernel: audit: type=1325 audit(1765890168.206:840): table=filter:133 family=2 entries=26 op=nft_register_rule pid=5381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:02:48.206000 audit[5381]: NETFILTER_CFG table=filter:133 family=2 entries=26 op=nft_register_rule pid=5381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:02:48.206000 audit[5381]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff98a6ded0 a2=0 a3=7fff98a6debc items=0 ppid=2965 pid=5381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:48.231129 kernel: audit: type=1300 audit(1765890168.206:840): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff98a6ded0 a2=0 a3=7fff98a6debc items=0 ppid=2965 pid=5381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:48.231270 kernel: audit: type=1327 audit(1765890168.206:840): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:02:48.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:02:48.233000 audit[5381]: NETFILTER_CFG table=nat:134 family=2 entries=104 op=nft_register_chain pid=5381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:02:48.247732 kernel: audit: type=1325 audit(1765890168.233:841): table=nat:134 family=2 
entries=104 op=nft_register_chain pid=5381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:02:48.247792 kernel: audit: type=1300 audit(1765890168.233:841): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff98a6ded0 a2=0 a3=7fff98a6debc items=0 ppid=2965 pid=5381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:48.233000 audit[5381]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff98a6ded0 a2=0 a3=7fff98a6debc items=0 ppid=2965 pid=5381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:48.251053 kernel: audit: type=1327 audit(1765890168.233:841): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:02:48.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:02:50.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.239.193.244:22-147.75.109.163:55652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:50.306457 systemd[1]: Started sshd@17-172.239.193.244:22-147.75.109.163:55652.service - OpenSSH per-connection server daemon (147.75.109.163:55652). Dec 16 13:02:50.314161 kernel: audit: type=1130 audit(1765890170.305:842): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.239.193.244:22-147.75.109.163:55652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:02:50.649000 audit[5383]: USER_ACCT pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:50.652727 sshd-session[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:50.656969 sshd[5383]: Accepted publickey for core from 147.75.109.163 port 55652 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:50.662031 kernel: audit: type=1101 audit(1765890170.649:843): pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:50.651000 audit[5383]: CRED_ACQ pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:50.674531 kernel: audit: type=1103 audit(1765890170.651:844): pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:50.674584 kernel: audit: type=1006 audit(1765890170.651:845): pid=5383 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 13:02:50.651000 audit[5383]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd562550f0 a2=3 a3=0 items=0 ppid=1 pid=5383 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:50.651000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:50.675205 systemd-logind[1599]: New session 18 of user core. Dec 16 13:02:50.683205 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 13:02:50.686000 audit[5383]: USER_START pid=5383 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:50.689000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:50.930847 sshd[5386]: Connection closed by 147.75.109.163 port 55652 Dec 16 13:02:50.933466 sshd-session[5383]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:50.934000 audit[5383]: USER_END pid=5383 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:50.934000 audit[5383]: CRED_DISP pid=5383 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:50.940184 systemd-logind[1599]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:02:50.941228 systemd[1]: sshd@17-172.239.193.244:22-147.75.109.163:55652.service: Deactivated successfully. Dec 16 13:02:50.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.239.193.244:22-147.75.109.163:55652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:50.943665 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:02:50.946849 systemd-logind[1599]: Removed session 18. 
Dec 16 13:02:52.161300 kubelet[2813]: E1216 13:02:52.161229 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-559c85bdcd-glvwq" podUID="3c1f37f8-b232-4ab7-9b50-17ad83754886" Dec 16 13:02:52.161734 kubelet[2813]: E1216 13:02:52.161485 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555f98f96-jlzdx" podUID="537eba7d-0fc3-4664-af16-ea0352a41fb1" Dec 16 13:02:54.160319 kubelet[2813]: E1216 13:02:54.160262 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8llz" podUID="1c658f10-e923-42a5-b425-72ee5f2a64c8" Dec 16 13:02:55.157790 kubelet[2813]: E1216 13:02:55.157744 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 16 13:02:56.003524 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 13:02:56.003611 kernel: audit: type=1130 audit(1765890175.999:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.239.193.244:22-147.75.109.163:50842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:55.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.239.193.244:22-147.75.109.163:50842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:02:56.000348 systemd[1]: Started sshd@18-172.239.193.244:22-147.75.109.163:50842.service - OpenSSH per-connection server daemon (147.75.109.163:50842). 
Dec 16 13:02:56.339000 audit[5398]: USER_ACCT pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.347126 sshd[5398]: Accepted publickey for core from 147.75.109.163 port 50842 ssh2: RSA SHA256:qlZEpC2nNcUiQgAT4tEG5E76OkyowqFzXOlbLlJ+gto Dec 16 13:02:56.345000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.346782 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:02:56.348883 kernel: audit: type=1101 audit(1765890176.339:852): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.348936 kernel: audit: type=1103 audit(1765890176.345:853): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.354508 systemd-logind[1599]: New session 19 of user core. Dec 16 13:02:56.355193 kernel: audit: type=1006 audit(1765890176.345:854): pid=5398 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 13:02:56.345000 audit[5398]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9bc5fd40 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:56.361079 kernel: audit: type=1300 audit(1765890176.345:854): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9bc5fd40 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:02:56.362318 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 13:02:56.345000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:56.371049 kernel: audit: type=1327 audit(1765890176.345:854): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:02:56.368000 audit[5398]: USER_START pid=5398 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.380076 kernel: audit: type=1105 audit(1765890176.368:855): pid=5398 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.380125 kernel: audit: type=1103 audit(1765890176.378:856): pid=5401 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.378000 audit[5401]: CRED_ACQ pid=5401 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.582554 sshd[5401]: Connection closed by 147.75.109.163 port 50842 Dec 16 13:02:56.583226 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Dec 16 13:02:56.584000 audit[5398]: USER_END pid=5398 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.595042 kernel: audit: type=1106 audit(1765890176.584:857): pid=5398 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.595355 systemd[1]: sshd@18-172.239.193.244:22-147.75.109.163:50842.service: Deactivated successfully. Dec 16 13:02:56.584000 audit[5398]: CRED_DISP pid=5398 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:56.598398 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:02:56.600084 systemd-logind[1599]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:02:56.604156 systemd-logind[1599]: Removed session 19. Dec 16 13:02:56.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.239.193.244:22-147.75.109.163:50842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:02:56.605052 kernel: audit: type=1104 audit(1765890176.584:858): pid=5398 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 13:02:58.159556 kubelet[2813]: E1216 13:02:58.159506 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-579dd74c6f-9xbhn" podUID="7a67396c-f7d6-4b07-9621-f2a99ef21577"
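The repeated ImagePullBackOff entries all reduce to the same root cause reported by containerd: the tag v3.30.4 cannot be resolved for the ghcr.io/flatcar/calico/* images. A rough way to confirm that a tag is absent from the registry is sketched below, assuming GHCR's standard Docker Registry v2 API with anonymous token issuance for public packages; the endpoint names, parameters, and media types here are assumptions for illustration, not taken from this log.

    # Sketch: check whether a tag exists on ghcr.io via the Docker Registry v2 API.
    # Assumes anonymous pull tokens are issued for public images (endpoints are assumptions).
    import json
    import urllib.error
    import urllib.request

    def ghcr_tag_exists(repository: str, tag: str) -> bool:
        token_url = (
            "https://ghcr.io/token?service=ghcr.io"
            f"&scope=repository:{repository}:pull"
        )
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repository}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.v2+json",
            },
        )
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:  # matches the "not found" in the kubelet errors above
                return False
            raise

    print(ghcr_tag_exists("flatcar/calico/apiserver", "v3.30.4"))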